Test Report: QEMU_macOS 15585

431a2968588dcb34d31f7f4fc0380544d7f85afa:2023-07-19:30225

Failed tests (91/255)

Order  Failed Test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 21
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 9.88
24 TestAddons/parallel/Registry 720.88
25 TestAddons/parallel/Ingress 0.73
26 TestAddons/parallel/InspektorGadget 480.83
30 TestAddons/parallel/CSI 720.91
32 TestAddons/parallel/CloudSpanner 819.15
37 TestCertOptions 10.13
38 TestCertExpiration 196.77
39 TestDockerFlags 9.89
40 TestForceSystemdFlag 11.45
41 TestForceSystemdEnv 10.04
86 TestFunctional/parallel/ServiceCmdConnect 39.07
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.17
153 TestImageBuild/serial/BuildWithBuildArg 1.04
162 TestIngressAddonLegacy/serial/ValidateIngressAddons 56.28
197 TestMountStart/serial/StartWithMountFirst 10.29
200 TestMultiNode/serial/FreshStart2Nodes 9.94
201 TestMultiNode/serial/DeployApp2Nodes 109.06
202 TestMultiNode/serial/PingHostFrom2Pods 0.08
203 TestMultiNode/serial/AddNode 0.07
204 TestMultiNode/serial/ProfileList 0.17
205 TestMultiNode/serial/CopyFile 0.06
206 TestMultiNode/serial/StopNode 0.13
207 TestMultiNode/serial/StartAfterStop 0.1
208 TestMultiNode/serial/RestartKeepsNodes 5.36
209 TestMultiNode/serial/DeleteNode 0.1
210 TestMultiNode/serial/StopMultiNode 0.14
211 TestMultiNode/serial/RestartMultiNode 5.25
212 TestMultiNode/serial/ValidateNameConflict 20.13
216 TestPreload 10.06
218 TestScheduledStopUnix 9.89
219 TestSkaffold 13.13
222 TestRunningBinaryUpgrade 179
224 TestKubernetesUpgrade 15.29
237 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.4
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.13
239 TestStoppedBinaryUpgrade/Setup 171.71
241 TestPause/serial/Start 9.77
251 TestNoKubernetes/serial/StartWithK8s 9.85
252 TestNoKubernetes/serial/StartWithStopK8s 5.47
253 TestNoKubernetes/serial/Start 5.47
257 TestNoKubernetes/serial/StartNoArgs 5.46
259 TestNetworkPlugins/group/auto/Start 9.78
260 TestNetworkPlugins/group/kindnet/Start 10.06
261 TestNetworkPlugins/group/flannel/Start 9.99
262 TestNetworkPlugins/group/enable-default-cni/Start 9.79
263 TestNetworkPlugins/group/bridge/Start 9.63
264 TestNetworkPlugins/group/kubenet/Start 9.85
265 TestNetworkPlugins/group/custom-flannel/Start 9.69
266 TestNetworkPlugins/group/calico/Start 9.88
267 TestNetworkPlugins/group/false/Start 9.78
269 TestStartStop/group/old-k8s-version/serial/FirstStart 9.85
270 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
271 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
274 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
275 TestStoppedBinaryUpgrade/Upgrade 1.46
276 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
277 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
278 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
279 TestStartStop/group/old-k8s-version/serial/Pause 0.1
280 TestStoppedBinaryUpgrade/MinikubeLogs 0.08
282 TestStartStop/group/no-preload/serial/FirstStart 9.89
284 TestStartStop/group/embed-certs/serial/FirstStart 12.06
285 TestStartStop/group/no-preload/serial/DeployApp 0.1
286 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
289 TestStartStop/group/no-preload/serial/SecondStart 6.96
290 TestStartStop/group/embed-certs/serial/DeployApp 0.09
291 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
294 TestStartStop/group/embed-certs/serial/SecondStart 5.22
295 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
296 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.05
297 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
298 TestStartStop/group/no-preload/serial/Pause 0.1
299 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
300 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
301 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
302 TestStartStop/group/embed-certs/serial/Pause 0.1
304 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.79
306 TestStartStop/group/newest-cni/serial/FirstStart 12.08
307 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
308 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.14
311 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 7.25
316 TestStartStop/group/newest-cni/serial/SecondStart 5.2
317 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
318 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
319 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
320 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
323 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
324 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.16.0/json-events (21s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-744000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-744000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (20.99642825s)

-- stdout --
	{"specversion":"1.0","id":"55438efb-6d46-403c-96e5-d409a0e8bce3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-744000] minikube v1.31.0 on Darwin 13.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e6764e2-5727-42ff-8d1b-ae122d3921ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15585"}}
	{"specversion":"1.0","id":"43f76d49-80e5-4b5f-bac5-273060049da7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig"}}
	{"specversion":"1.0","id":"dc348b26-8bce-4152-9dd6-ce9172e37747","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"eafc708f-cc18-46e9-ac12-fc0185a90717","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c0acb1c3-09e6-48ad-ad91-a095a41da0ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube"}}
	{"specversion":"1.0","id":"7492f5b4-651b-4908-8ba9-59c61e345cef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"1c5cc83f-456b-423c-8b65-7262b631061d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd786f41-016a-4d03-b733-e7d96a02cf00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"5c73c9d7-9e2f-4df8-9e3b-1c024ffb619e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d8d61701-4995-4f67-951b-ba0b52169435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-744000 in cluster download-only-744000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"362e3b31-fdf0-4642-869c-2d5715258c31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7d9b6296-bf98-4023-98c1-f6bf61575ad4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0] Decompressors:map[bz2:0x14000122848 gz:0x140001228a0 tar:0x14000122850 tar.bz2:0x14000122860 tar.gz:0x14000122870 tar.xz:0x14000122880 tar.zst:0x14000122890 tbz2:0x14000122860 tgz:0x14000122870 txz:0x14000122880 tzst:0x14000122890 xz:0x140001228a8 zip:0x140001228b0 zst:0x140001228c0] Getters:map[file:0x1400065cc30 http:0x14000a4a190 https:0x14000a4a1e0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"992ea9bb-b4bd-44c7-a626-de5cb14b70d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0719 15:51:12.099826    1472 out.go:296] Setting OutFile to fd 1 ...
	I0719 15:51:12.099941    1472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:12.099946    1472 out.go:309] Setting ErrFile to fd 2...
	I0719 15:51:12.099949    1472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:12.100064    1472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	W0719 15:51:12.100124    1472 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/15585-1056/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15585-1056/.minikube/config/config.json: no such file or directory
	I0719 15:51:12.101214    1472 out.go:303] Setting JSON to true
	I0719 15:51:12.117276    1472 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1243,"bootTime":1689805829,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 15:51:12.117355    1472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 15:51:12.122226    1472 out.go:97] [download-only-744000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 15:51:12.125171    1472 out.go:169] MINIKUBE_LOCATION=15585
	I0719 15:51:12.122360    1472 notify.go:220] Checking for updates...
	W0719 15:51:12.122367    1472 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 15:51:12.131133    1472 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:51:12.134170    1472 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 15:51:12.137136    1472 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:51:12.140165    1472 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	W0719 15:51:12.146117    1472 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 15:51:12.146319    1472 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 15:51:12.152220    1472 out.go:97] Using the qemu2 driver based on user configuration
	I0719 15:51:12.152243    1472 start.go:298] selected driver: qemu2
	I0719 15:51:12.152246    1472 start.go:880] validating driver "qemu2" against <nil>
	I0719 15:51:12.152334    1472 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 15:51:12.154333    1472 out.go:169] Automatically selected the socket_vmnet network
	I0719 15:51:12.159385    1472 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0719 15:51:12.159455    1472 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 15:51:12.159508    1472 cni.go:84] Creating CNI manager for ""
	I0719 15:51:12.159522    1472 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 15:51:12.159531    1472 start_flags.go:319] config:
	{Name:download-only-744000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-744000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:51:12.165087    1472 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:51:12.169194    1472 out.go:97] Downloading VM boot image ...
	I0719 15:51:12.169226    1472 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	I0719 15:51:22.049392    1472 out.go:97] Starting control plane node download-only-744000 in cluster download-only-744000
	I0719 15:51:22.049404    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0719 15:51:22.146428    1472 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0719 15:51:22.146469    1472 cache.go:57] Caching tarball of preloaded images
	I0719 15:51:22.146678    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0719 15:51:22.151790    1472 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0719 15:51:22.151799    1472 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 15:51:22.378262    1472 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0719 15:51:31.802635    1472 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 15:51:31.802771    1472 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 15:51:32.442477    1472 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0719 15:51:32.442679    1472 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/download-only-744000/config.json ...
	I0719 15:51:32.442699    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/download-only-744000/config.json: {Name:mk9bad5674d07bb0011804ae23f3f05ea64dfd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:51:32.442930    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0719 15:51:32.443154    1472 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0719 15:51:33.027281    1472 out.go:169] 
	W0719 15:51:33.032206    1472 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0] Decompressors:map[bz2:0x14000122848 gz:0x140001228a0 tar:0x14000122850 tar.bz2:0x14000122860 tar.gz:0x14000122870 tar.xz:0x14000122880 tar.zst:0x14000122890 tbz2:0x14000122860 tgz:0x14000122870 txz:0x14000122880 tzst:0x14000122890 xz:0x140001228a8 zip:0x140001228b0 zst:0x140001228c0] Getters:map[file:0x1400065cc30 http:0x14000a4a190 https:0x14000a4a1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0719 15:51:33.032230    1472 out_reason.go:110] 
	W0719 15:51:33.039302    1472 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:51:33.043265    1472 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-744000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (21.00s)
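
The root cause above is the 404 on the kubectl checksum URL: v1.16.0 predates Apple Silicon, and no darwin/arm64 kubectl binary (or its .sha1 file) was ever published for that release, so the cache step cannot succeed on this architecture. A minimal off-CI confirmation, assuming only curl and network access on the host (URL taken from the log above):

	# follow dl.k8s.io's redirect and print the final HTTP status code
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1
	# prints 404, matching the "bad response code: 404" in the getter error above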

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
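
This failure is downstream of the json-events failure above: the checksum download 404ed, so kubectl was never written to the cache, and the stat here cannot find it. A hypothetical manual check on the CI host, using the path from the assertion:

	stat /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	# should report "No such file or directory" until a kubectl binary is actually cached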

TestOffline (9.88s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-089000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-089000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.747294708s)

-- stdout --
	* [offline-docker-089000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-089000 in cluster offline-docker-089000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-089000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:37:14.039923    3569 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:37:14.040054    3569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:37:14.040057    3569 out.go:309] Setting ErrFile to fd 2...
	I0719 16:37:14.040060    3569 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:37:14.040194    3569 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:37:14.041253    3569 out.go:303] Setting JSON to false
	I0719 16:37:14.058297    3569 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4005,"bootTime":1689805829,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:37:14.058381    3569 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:37:14.063440    3569 out.go:177] * [offline-docker-089000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:37:14.071249    3569 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:37:14.071299    3569 notify.go:220] Checking for updates...
	I0719 16:37:14.078294    3569 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:37:14.081243    3569 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:37:14.084295    3569 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:37:14.087333    3569 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:37:14.090200    3569 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:37:14.093589    3569 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:37:14.093650    3569 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:37:14.097234    3569 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:37:14.104258    3569 start.go:298] selected driver: qemu2
	I0719 16:37:14.104265    3569 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:37:14.104272    3569 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:37:14.106050    3569 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:37:14.109233    3569 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:37:14.112342    3569 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:37:14.112362    3569 cni.go:84] Creating CNI manager for ""
	I0719 16:37:14.112381    3569 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:37:14.112387    3569 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:37:14.112395    3569 start_flags.go:319] config:
	{Name:offline-docker-089000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:37:14.116831    3569 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:37:14.124362    3569 out.go:177] * Starting control plane node offline-docker-089000 in cluster offline-docker-089000
	I0719 16:37:14.128233    3569 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:37:14.128264    3569 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:37:14.128272    3569 cache.go:57] Caching tarball of preloaded images
	I0719 16:37:14.128334    3569 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:37:14.128340    3569 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:37:14.128413    3569 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/offline-docker-089000/config.json ...
	I0719 16:37:14.128430    3569 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/offline-docker-089000/config.json: {Name:mk8cadbc0ecb33daa875f1126515e34ea3323731 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:37:14.128612    3569 start.go:365] acquiring machines lock for offline-docker-089000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:37:14.128640    3569 start.go:369] acquired machines lock for "offline-docker-089000" in 21.416µs
	I0719 16:37:14.128650    3569 start.go:93] Provisioning new machine with config: &{Name:offline-docker-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:37:14.128691    3569 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:37:14.135277    3569 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 16:37:14.149308    3569 start.go:159] libmachine.API.Create for "offline-docker-089000" (driver="qemu2")
	I0719 16:37:14.149338    3569 client.go:168] LocalClient.Create starting
	I0719 16:37:14.149404    3569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:37:14.149425    3569 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:14.149439    3569 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:14.149492    3569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:37:14.149511    3569 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:14.149521    3569 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:14.149878    3569 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:37:14.266306    3569 main.go:141] libmachine: Creating SSH key...
	I0719 16:37:14.398747    3569 main.go:141] libmachine: Creating Disk image...
	I0719 16:37:14.398754    3569 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:37:14.398891    3569 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2
	I0719 16:37:14.407389    3569 main.go:141] libmachine: STDOUT: 
	I0719 16:37:14.407423    3569 main.go:141] libmachine: STDERR: 
	I0719 16:37:14.407482    3569 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2 +20000M
	I0719 16:37:14.415814    3569 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:37:14.415832    3569 main.go:141] libmachine: STDERR: 
	I0719 16:37:14.415852    3569 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2
	I0719 16:37:14.415860    3569 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:37:14.415894    3569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:88:f4:aa:3d:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2
	I0719 16:37:14.417795    3569 main.go:141] libmachine: STDOUT: 
	I0719 16:37:14.417811    3569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:37:14.417831    3569 client.go:171] LocalClient.Create took 268.493417ms
	I0719 16:37:16.419863    3569 start.go:128] duration metric: createHost completed in 2.291204125s
	I0719 16:37:16.419886    3569 start.go:83] releasing machines lock for "offline-docker-089000", held for 2.291282459s
	W0719 16:37:16.419902    3569 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:16.430031    3569 out.go:177] * Deleting "offline-docker-089000" in qemu2 ...
	W0719 16:37:16.443180    3569 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:16.443194    3569 start.go:687] Will try again in 5 seconds ...
	I0719 16:37:21.445313    3569 start.go:365] acquiring machines lock for offline-docker-089000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:37:21.445802    3569 start.go:369] acquired machines lock for "offline-docker-089000" in 368.834µs
	I0719 16:37:21.445965    3569 start.go:93] Provisioning new machine with config: &{Name:offline-docker-089000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-089000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:37:21.446327    3569 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:37:21.454076    3569 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 16:37:21.502686    3569 start.go:159] libmachine.API.Create for "offline-docker-089000" (driver="qemu2")
	I0719 16:37:21.502749    3569 client.go:168] LocalClient.Create starting
	I0719 16:37:21.502921    3569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:37:21.502995    3569 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:21.503019    3569 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:21.503109    3569 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:37:21.503143    3569 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:21.503158    3569 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:21.503810    3569 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:37:21.629647    3569 main.go:141] libmachine: Creating SSH key...
	I0719 16:37:21.707430    3569 main.go:141] libmachine: Creating Disk image...
	I0719 16:37:21.707436    3569 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:37:21.707577    3569 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2
	I0719 16:37:21.715898    3569 main.go:141] libmachine: STDOUT: 
	I0719 16:37:21.715912    3569 main.go:141] libmachine: STDERR: 
	I0719 16:37:21.715957    3569 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2 +20000M
	I0719 16:37:21.723092    3569 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:37:21.723106    3569 main.go:141] libmachine: STDERR: 
	I0719 16:37:21.723119    3569 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2
	I0719 16:37:21.723128    3569 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:37:21.723157    3569 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:97:c5:85:7e:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/offline-docker-089000/disk.qcow2
	I0719 16:37:21.724650    3569 main.go:141] libmachine: STDOUT: 
	I0719 16:37:21.724663    3569 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:37:21.724676    3569 client.go:171] LocalClient.Create took 221.926625ms
	I0719 16:37:23.726747    3569 start.go:128] duration metric: createHost completed in 2.280427709s
	I0719 16:37:23.726771    3569 start.go:83] releasing machines lock for "offline-docker-089000", held for 2.280968833s
	W0719 16:37:23.726895    3569 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-089000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-089000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:23.735221    3569 out.go:177] 
	W0719 16:37:23.739167    3569 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:37:23.739174    3569 out.go:239] * 
	* 
	W0719 16:37:23.739824    3569 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:37:23.750186    3569 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-089000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-07-19 16:37:23.760509 -0700 PDT m=+2771.810101335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-089000 -n offline-docker-089000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-089000 -n offline-docker-089000: exit status 7 (31.724125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-089000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-089000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-089000
--- FAIL: TestOffline (9.88s)
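
Every qemu2 VM start in this run dies the same way: socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, so no VM ever boots, and most of the roughly ten-second failures in the table above share this signature. A hedged triage sketch for the CI host (it assumes socket_vmnet was set up per the minikube qemu2 driver docs; the exact supervision mechanism may differ):

	ls -l /var/run/socket_vmnet    # the listening socket should exist if the daemon is up
	pgrep -fl socket_vmnet         # is the socket_vmnet daemon (not just the client) running?
	# if the daemon is down, restarting it per the docs should clear the "Connection refused"
	# errors; the pattern points at the host environment rather than a minikube regression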

TestAddons/parallel/Registry (720.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:304: failed waiting for registry replicacontroller to stabilize: timed out waiting for the condition
addons_test.go:306: registry stabilized in 6m0.001392875s
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
addons_test.go:308: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-101000 -n addons-101000
addons_test.go:308: TestAddons/parallel/Registry: showing logs for failed pods as of 2023-07-19 16:10:30.385795 -0700 PDT m=+1158.379486168
addons_test.go:309: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
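
The selector in the wait above suggests a direct manual check; a sketch against the profile named in this log (assuming the addons-101000 cluster is still reachable at post-mortem time):

	out/minikube-darwin-arm64 -p addons-101000 kubectl -- get pods -n kube-system -l actual-registry=true -o wide
	# if pods exist but never become Ready, `kubectl describe` and the container logs are the next step
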
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-101000 -n addons-101000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-101000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | -p download-only-744000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | -p download-only-744000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| delete  | -p download-only-744000        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| delete  | -p download-only-744000        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| start   | --download-only -p             | binary-mirror-101000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | binary-mirror-101000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-101000        | binary-mirror-101000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| start   | -p addons-101000               | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:58 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 16:10 PDT |                     |
	|         | addons-101000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/19 15:51:46
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:51:46.194297    1544 out.go:296] Setting OutFile to fd 1 ...
	I0719 15:51:46.194422    1544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:46.194425    1544 out.go:309] Setting ErrFile to fd 2...
	I0719 15:51:46.194428    1544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:46.194533    1544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 15:51:46.195601    1544 out.go:303] Setting JSON to false
	I0719 15:51:46.210650    1544 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1277,"bootTime":1689805829,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 15:51:46.210718    1544 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 15:51:46.215551    1544 out.go:177] * [addons-101000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 15:51:46.222492    1544 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 15:51:46.226347    1544 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:51:46.222559    1544 notify.go:220] Checking for updates...
	I0719 15:51:46.229495    1544 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 15:51:46.232495    1544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:51:46.235514    1544 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 15:51:46.238453    1544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:51:46.241624    1544 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 15:51:46.245483    1544 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 15:51:46.252546    1544 start.go:298] selected driver: qemu2
	I0719 15:51:46.252550    1544 start.go:880] validating driver "qemu2" against <nil>
	I0719 15:51:46.252555    1544 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:51:46.254378    1544 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 15:51:46.257444    1544 out.go:177] * Automatically selected the socket_vmnet network
	I0719 15:51:46.260545    1544 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:51:46.260571    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:51:46.260577    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:51:46.260582    1544 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 15:51:46.260588    1544 start_flags.go:319] config:
	{Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:51:46.264618    1544 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:51:46.272375    1544 out.go:177] * Starting control plane node addons-101000 in cluster addons-101000
	I0719 15:51:46.276444    1544 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:51:46.276475    1544 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 15:51:46.276490    1544 cache.go:57] Caching tarball of preloaded images
	I0719 15:51:46.276551    1544 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 15:51:46.276557    1544 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 15:51:46.276783    1544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json ...
	I0719 15:51:46.276796    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json: {Name:mk5e2042adc5d3df20329816c5917e6964724b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:51:46.277018    1544 start.go:365] acquiring machines lock for addons-101000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:51:46.277112    1544 start.go:369] acquired machines lock for "addons-101000" in 87.917µs
	I0719 15:51:46.277123    1544 start.go:93] Provisioning new machine with config: &{Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 15:51:46.277150    1544 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 15:51:46.284454    1544 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 15:51:46.605350    1544 start.go:159] libmachine.API.Create for "addons-101000" (driver="qemu2")
	I0719 15:51:46.605388    1544 client.go:168] LocalClient.Create starting
	I0719 15:51:46.605532    1544 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 15:51:46.811960    1544 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 15:51:46.895754    1544 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 15:51:47.292066    1544 main.go:141] libmachine: Creating SSH key...
	I0719 15:51:47.506919    1544 main.go:141] libmachine: Creating Disk image...
	I0719 15:51:47.506931    1544 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 15:51:47.507226    1544 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.541355    1544 main.go:141] libmachine: STDOUT: 
	I0719 15:51:47.541387    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.541456    1544 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2 +20000M
	I0719 15:51:47.548773    1544 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 15:51:47.548786    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.548801    1544 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.548806    1544 main.go:141] libmachine: Starting QEMU VM...
	I0719 15:51:47.548843    1544 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:3a:02:96:05:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.614276    1544 main.go:141] libmachine: STDOUT: 
	I0719 15:51:47.614314    1544 main.go:141] libmachine: STDERR: 
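	The Run line above shows how the guest is launched: the entire qemu-system-aarch64 argument list is handed to socket_vmnet_client, which dials the vmnet daemon at /var/run/socket_vmnet and passes the resulting socket to QEMU as fd 3 (hence "-netdev socket,id=net0,fd=3"). A stand-alone Go sketch of that invocation, with paths and flags copied from the log line; startVM is an illustrative helper, not minikube's actual driver code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// startVM mirrors the logged command: socket_vmnet_client execs QEMU
	// with the vmnet socket already open on fd 3.
	func startVM(machineDir, mac string) error {
		args := []string{
			"/var/run/socket_vmnet",
			"qemu-system-aarch64",
			"-M", "virt", "-cpu", "host",
			"-drive", "file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash",
			"-display", "none", "-accel", "hvf",
			"-m", "4000", "-smp", "2", "-boot", "d",
			"-cdrom", machineDir + "/boot2docker.iso",
			"-qmp", "unix:" + machineDir + "/monitor,server,nowait",
			"-pidfile", machineDir + "/qemu.pid",
			"-device", "virtio-net-pci,netdev=net0,mac=" + mac,
			"-netdev", "socket,id=net0,fd=3",
			"-daemonize", machineDir + "/disk.qcow2",
		}
		out, err := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client", args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("starting VM: %w: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := startVM("/tmp/machines/demo", "36:3a:02:96:05:da"); err != nil {
			fmt.Println(err)
		}
	}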
	I0719 15:51:47.614319    1544 main.go:141] libmachine: Attempt 0
	I0719 15:51:47.614337    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:49.616493    1544 main.go:141] libmachine: Attempt 1
	I0719 15:51:49.616586    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:51.618782    1544 main.go:141] libmachine: Attempt 2
	I0719 15:51:51.618840    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:53.620905    1544 main.go:141] libmachine: Attempt 3
	I0719 15:51:53.620918    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:55.622923    1544 main.go:141] libmachine: Attempt 4
	I0719 15:51:55.622935    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:57.624701    1544 main.go:141] libmachine: Attempt 5
	I0719 15:51:57.624722    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:59.626797    1544 main.go:141] libmachine: Attempt 6
	I0719 15:51:59.626823    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:59.626967    1544 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0719 15:51:59.626997    1544 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b9ba8e}
	I0719 15:51:59.627004    1544 main.go:141] libmachine: Found match: 36:3a:2:96:5:da
	I0719 15:51:59.627013    1544 main.go:141] libmachine: IP: 192.168.105.2
	I0719 15:51:59.627019    1544 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
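	Note that the MAC searched for, 36:3a:2:96:5:da, is the QEMU-assigned 36:3a:02:96:05:da with leading zeros stripped, since macOS records lease hardware addresses without zero-padding. A minimal Go sketch of that lease scan; the ip_address= field name is an assumption inferred from the "dhcp entry" line above, and findLeaseIP is an illustrative helper:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans /var/db/dhcpd_leases for the guest's MAC and returns
	// the IP recorded in the same lease block.
	func findLeaseIP(mac string) (string, error) {
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			return "", err
		}
		defer f.Close()
		ip := ""
		s := bufio.NewScanner(f)
		for s.Scan() {
			line := strings.TrimSpace(s.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			if strings.Contains(line, mac) && ip != "" {
				return ip, nil // MAC matched after its block's ip_address
			}
		}
		return "", fmt.Errorf("no lease for %s", mac)
	}

	func main() {
		fmt.Println(findLeaseIP("36:3a:2:96:5:da"))
	}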
	I0719 15:52:01.648988    1544 machine.go:88] provisioning docker machine ...
	I0719 15:52:01.649049    1544 buildroot.go:166] provisioning hostname "addons-101000"
	I0719 15:52:01.650570    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.651376    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.651395    1544 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-101000 && echo "addons-101000" | sudo tee /etc/hostname
	I0719 15:52:01.738896    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-101000
	
	I0719 15:52:01.739019    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.739522    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.739541    1544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-101000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-101000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-101000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:52:01.809650    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:52:01.809665    1544 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15585-1056/.minikube CaCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15585-1056/.minikube}
	I0719 15:52:01.809675    1544 buildroot.go:174] setting up certificates
	I0719 15:52:01.809702    1544 provision.go:83] configureAuth start
	I0719 15:52:01.809710    1544 provision.go:138] copyHostCerts
	I0719 15:52:01.809873    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem (1675 bytes)
	I0719 15:52:01.810191    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem (1082 bytes)
	I0719 15:52:01.810327    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem (1123 bytes)
	I0719 15:52:01.810465    1544 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem org=jenkins.addons-101000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-101000]
	I0719 15:52:01.879682    1544 provision.go:172] copyRemoteCerts
	I0719 15:52:01.879750    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:52:01.879766    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:01.912803    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:52:01.919846    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 15:52:01.926696    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:52:01.934065    1544 provision.go:86] duration metric: configureAuth took 124.357417ms
	I0719 15:52:01.934074    1544 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:52:01.934167    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:01.934205    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.934418    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.934423    1544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 15:52:01.991251    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 15:52:01.991259    1544 buildroot.go:70] root file system type: tmpfs
	I0719 15:52:01.991320    1544 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 15:52:01.991364    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.991596    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.991636    1544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 15:52:02.049859    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 15:52:02.049895    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:02.050139    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:02.050148    1544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 15:52:02.386931    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 15:52:02.386943    1544 machine.go:91] provisioned docker machine in 737.934792ms
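	The "diff ... || { mv ...; systemctl ... }" command above only swaps in docker.service.new and restarts Docker when the rendered unit differs from what is on disk; here the diff fails because no unit exists yet, so the file is installed and the service enabled. The same compare-then-swap idea in Go (writeIfChanged is a hypothetical helper, error handling trimmed):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// writeIfChanged reports whether path had to be rewritten; callers only
	// daemon-reload and restart the service when it returns true.
	func writeIfChanged(path string, content []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil // identical content: leave the unit alone
		}
		if err := os.WriteFile(path+".new", content, 0o644); err != nil {
			return false, err
		}
		return true, os.Rename(path+".new", path) // swap in place, like the mv
	}

	func main() {
		changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
		fmt.Println(changed, err)
	}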
	I0719 15:52:02.386948    1544 client.go:171] LocalClient.Create took 15.78172825s
	I0719 15:52:02.386964    1544 start.go:167] duration metric: libmachine.API.Create for "addons-101000" took 15.781794083s
	I0719 15:52:02.386972    1544 start.go:300] post-start starting for "addons-101000" (driver="qemu2")
	I0719 15:52:02.386977    1544 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:52:02.387049    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:52:02.387060    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.417331    1544 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:52:02.418797    1544 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 15:52:02.418804    1544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/addons for local assets ...
	I0719 15:52:02.418866    1544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/files for local assets ...
	I0719 15:52:02.418892    1544 start.go:303] post-start completed in 31.917459ms
	I0719 15:52:02.419241    1544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json ...
	I0719 15:52:02.419386    1544 start.go:128] duration metric: createHost completed in 16.142407666s
	I0719 15:52:02.419424    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:02.419636    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:02.419640    1544 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:52:02.473900    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689807122.468027460
	
	I0719 15:52:02.473908    1544 fix.go:206] guest clock: 1689807122.468027460
	I0719 15:52:02.473913    1544 fix.go:219] Guest: 2023-07-19 15:52:02.46802746 -0700 PDT Remote: 2023-07-19 15:52:02.419389 -0700 PDT m=+16.243794293 (delta=48.63846ms)
	I0719 15:52:02.473924    1544 fix.go:190] guest clock delta is within tolerance: 48.63846ms
	I0719 15:52:02.473927    1544 start.go:83] releasing machines lock for "addons-101000", held for 16.196985625s
	I0719 15:52:02.474254    1544 ssh_runner.go:195] Run: cat /version.json
	I0719 15:52:02.474266    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.474283    1544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:52:02.474307    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.504138    1544 ssh_runner.go:195] Run: systemctl --version
	I0719 15:52:02.506807    1544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:52:02.547286    1544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:52:02.547337    1544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:52:02.552537    1544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:52:02.552544    1544 start.go:466] detecting cgroup driver to use...
	I0719 15:52:02.552636    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:52:02.558292    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 15:52:02.561659    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 15:52:02.565184    1544 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 15:52:02.565214    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 15:52:02.568252    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 15:52:02.571117    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 15:52:02.574394    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 15:52:02.578183    1544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:52:02.581670    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 15:52:02.585324    1544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:52:02.588146    1544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:52:02.590822    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:02.668293    1544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 15:52:02.674070    1544 start.go:466] detecting cgroup driver to use...
	I0719 15:52:02.674127    1544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 15:52:02.680608    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:52:02.685038    1544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:52:02.692942    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:52:02.697559    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 15:52:02.702616    1544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 15:52:02.743950    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 15:52:02.749347    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:52:02.754962    1544 ssh_runner.go:195] Run: which cri-dockerd
	I0719 15:52:02.756311    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 15:52:02.758805    1544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 15:52:02.763719    1544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 15:52:02.840623    1544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 15:52:02.914223    1544 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 15:52:02.914238    1544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0719 15:52:02.919565    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:02.997357    1544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 15:52:04.154714    1544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157354834s)
	I0719 15:52:04.154783    1544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 15:52:04.233603    1544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 15:52:04.316105    1544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 15:52:04.398081    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:04.476954    1544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 15:52:04.483791    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:04.563625    1544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0719 15:52:04.586684    1544 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 15:52:04.586782    1544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 15:52:04.589057    1544 start.go:534] Will wait 60s for crictl version
	I0719 15:52:04.589109    1544 ssh_runner.go:195] Run: which crictl
	I0719 15:52:04.590411    1544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:52:04.605474    1544 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0719 15:52:04.605558    1544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 15:52:04.615366    1544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 15:52:04.635830    1544 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0719 15:52:04.635983    1544 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0719 15:52:04.637460    1544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
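	The bash one-liner above pins host.minikube.internal by filtering any previous entry out of /etc/hosts and appending a fresh one. An equivalent filter-and-append rewrite in Go (pinHost is a hypothetical name for illustration):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pinHost drops every line ending in "<tab>name" and appends
	// "ip<tab>name", matching the grep -v / echo pipeline in the log.
	func pinHost(hostsPath, name, ip string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		kept := []string{}
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		fmt.Println(pinHost("/tmp/hosts", "host.minikube.internal", "192.168.105.1"))
	}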
	I0719 15:52:04.641595    1544 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:52:04.641637    1544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 15:52:04.650900    1544 docker.go:636] Got preloaded images: 
	I0719 15:52:04.650907    1544 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0719 15:52:04.650940    1544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 15:52:04.654168    1544 ssh_runner.go:195] Run: which lz4
	I0719 15:52:04.655512    1544 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 15:52:04.656801    1544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:52:04.656815    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0719 15:52:05.915036    1544 docker.go:600] Took 1.259592 seconds to copy over tarball
	I0719 15:52:05.915105    1544 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:52:06.974144    1544 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.059035792s)
	I0719 15:52:06.974158    1544 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:52:06.989746    1544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 15:52:06.993185    1544 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0719 15:52:06.998174    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:07.075272    1544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 15:52:09.295448    1544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.220180667s)
	I0719 15:52:09.295552    1544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 15:52:09.301832    1544 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 15:52:09.301841    1544 cache_images.go:84] Images are preloaded, skipping loading
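	Before the tarball was extracted, the same "docker images" probe came back empty and kube-apiserver "wasn't preloaded"; after unpacking /preloaded.tar.lz4 and rewriting repositories.json, all eight images are visible and loading is skipped. A sketch of that preload check, assuming only that "docker images --format {{.Repository}}:{{.Tag}}" prints one image per line (imagesPreloaded is an illustrative helper):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imagesPreloaded compares docker's image list against the images the
	// preload tarball is expected to provide.
	func imagesPreloaded(want []string) (bool, error) {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range want {
			if !have[img] {
				return false, nil // at least one image missing: load needed
			}
		}
		return true, nil
	}

	func main() {
		ok, err := imagesPreloaded([]string{"registry.k8s.io/kube-apiserver:v1.27.3"})
		fmt.Println(ok, err)
	}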
	I0719 15:52:09.301925    1544 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 15:52:09.309268    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:52:09.309280    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:52:09.309309    1544 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0719 15:52:09.309319    1544 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-101000 NodeName:addons-101000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:52:09.309384    1544 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-101000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
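	The generated config keeps the pod network (podSubnet 10.244.0.0/16) and the service network (serviceSubnet 10.96.0.0/12) disjoint, and the first ClusterIP carved out of the service range, 10.96.0.1, becomes the kubernetes.default Service address (it appears in the apiserver cert SANs generated earlier). A quick net/netip check of those ranges, added as an illustrative aside:

	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		services := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet
		pods := netip.MustParsePrefix("10.244.0.0/16")    // podSubnet
		apiserver := netip.MustParseAddr("10.96.0.1")     // kubernetes.default ClusterIP

		fmt.Println(services.Contains(apiserver)) // true
		fmt.Println(pods.Contains(apiserver))     // false
		fmt.Println(services.Overlaps(pods))      // false: the ranges are disjoint
	}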
	
	I0719 15:52:09.309419    1544 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-101000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0719 15:52:09.309476    1544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0719 15:52:09.312676    1544 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:52:09.312711    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:52:09.315418    1544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0719 15:52:09.320330    1544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:52:09.325480    1544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0719 15:52:09.330778    1544 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0719 15:52:09.332164    1544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:52:09.335590    1544 certs.go:56] Setting up /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000 for IP: 192.168.105.2
	I0719 15:52:09.335612    1544 certs.go:190] acquiring lock for shared ca certs: {Name:mk57268b94adc82cb06ba056d8f0acecf538b87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.335779    1544 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key
	I0719 15:52:09.375531    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt ...
	I0719 15:52:09.375537    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt: {Name:mk18dc73651ebb7586f5cc870528fe59bb3eaca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.375716    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key ...
	I0719 15:52:09.375718    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key: {Name:mkf4847c0170d0ed2e02012567d5849b7cdc3e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.375829    1544 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key
	I0719 15:52:09.479964    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt ...
	I0719 15:52:09.479968    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt: {Name:mk931f43b9aeac1a637bc02f03d26df5c2c21559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.480104    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key ...
	I0719 15:52:09.480107    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key: {Name:mkbbced5a2200a63ea6918cadfce8d25c9e09696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.480228    1544 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key
	I0719 15:52:09.480236    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt with IP's: []
	I0719 15:52:09.550153    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt ...
	I0719 15:52:09.550157    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: {Name:mkfbd0ec0d392f0ad08f01bd61787ea0a90ba52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.550273    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key ...
	I0719 15:52:09.550276    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key: {Name:mk8c74eed8437e78eaa33e4b6b240669ae86a824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.550378    1544 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969
	I0719 15:52:09.550392    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0719 15:52:09.700054    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 ...
	I0719 15:52:09.700063    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969: {Name:mkf03671886dbbbb632ec2e172f912e064d8e1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.700299    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969 ...
	I0719 15:52:09.700303    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969: {Name:mk303c08d6b543a4cd38e9de14800a408d1d2869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.700416    1544 certs.go:337] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt
	I0719 15:52:09.700598    1544 certs.go:341] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key
	I0719 15:52:09.700687    1544 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key
	I0719 15:52:09.700696    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt with IP's: []
	I0719 15:52:09.741490    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt ...
	I0719 15:52:09.741493    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt: {Name:mk78bf5cf7588d5f6faf8ac273455bded2325b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.741610    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key ...
	I0719 15:52:09.741617    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key: {Name:mkd247533695e3682be8e4d6fb67fe0e52efd3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.741840    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 15:52:09.741864    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:52:09.741886    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:52:09.741911    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem (1675 bytes)
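	The client, apiserver, and proxy-client certificates above are all signed by CAs that certs.go generates on first start (minikubeCA and proxyClientCA). A compact crypto/x509 sketch of such a self-signed CA; key size and validity here are illustrative choices, not minikube's actual parameters:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: the template acts as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}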
	I0719 15:52:09.742194    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0719 15:52:09.749753    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:52:09.756925    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:52:09.764081    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:52:09.770653    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:52:09.777938    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 15:52:09.785429    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:52:09.792823    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:52:09.799580    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:52:09.806238    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:52:09.812452    1544 ssh_runner.go:195] Run: openssl version
	I0719 15:52:09.814304    1544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:52:09.817773    1544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.819362    1544 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 19 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.819381    1544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.821228    1544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:52:09.824165    1544 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0719 15:52:09.825567    1544 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0719 15:52:09.825608    1544 kubeadm.go:404] StartCluster: {Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:52:09.825669    1544 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 15:52:09.831218    1544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:52:09.834469    1544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:52:09.837516    1544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:52:09.840473    1544 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:52:09.840487    1544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
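	The /var/tmp/minikube/kubeadm.yaml referenced above is generated in memory by minikube and is not captured in this log. Purely as a hedged sketch of the kind of config that command consumes (field values mirror the StartCluster dump above; the real file also carries node and kubelet settings omitted here):

	    # Hypothetical sketch only -- the real kubeadm.yaml is generated by
	    # minikube and is not reproduced in this log. Values mirror the
	    # StartCluster dump above.
	    cat <<'EOF' >/var/tmp/minikube/kubeadm.yaml
	    apiVersion: kubeadm.k8s.io/v1beta3
	    kind: ClusterConfiguration
	    kubernetesVersion: v1.27.3
	    clusterName: addons-101000
	    controlPlaneEndpoint: control-plane.minikube.internal:8443
	    certificatesDir: /var/lib/minikube/certs
	    networking:
	      serviceSubnet: 10.96.0.0/12
	      dnsDomain: cluster.local
	    EOF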
	I0719 15:52:09.861549    1544 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0719 15:52:09.861581    1544 kubeadm.go:322] [preflight] Running pre-flight checks
	I0719 15:52:09.914251    1544 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:52:09.914308    1544 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:52:09.914371    1544 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:52:09.979760    1544 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:52:09.988950    1544 out.go:204]   - Generating certificates and keys ...
	I0719 15:52:09.988981    1544 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0719 15:52:09.989014    1544 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0719 15:52:10.135496    1544 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 15:52:10.224881    1544 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0719 15:52:10.328051    1544 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0719 15:52:10.529629    1544 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0719 15:52:10.721019    1544 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0719 15:52:10.721090    1544 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-101000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0719 15:52:10.787563    1544 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0719 15:52:10.787619    1544 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-101000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0719 15:52:10.835004    1544 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 15:52:10.905361    1544 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 15:52:10.998864    1544 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0719 15:52:10.998890    1544 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:52:11.030652    1544 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:52:11.128642    1544 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:52:11.289310    1544 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:52:11.400745    1544 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:52:11.407437    1544 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:52:11.407496    1544 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:52:11.407517    1544 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0719 15:52:11.489012    1544 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:52:11.495210    1544 out.go:204]   - Booting up control plane ...
	I0719 15:52:11.495279    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:52:11.495324    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:52:11.495356    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:52:11.495394    1544 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:52:11.496331    1544 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:52:15.497794    1544 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001180 seconds
	I0719 15:52:15.497922    1544 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 15:52:15.503484    1544 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 15:52:16.021124    1544 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 15:52:16.021263    1544 kubeadm.go:322] [mark-control-plane] Marking the node addons-101000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 15:52:16.527825    1544 kubeadm.go:322] [bootstrap-token] Using token: za2ad5.mzmzgft4t0cdmv0r
	I0719 15:52:16.534544    1544 out.go:204]   - Configuring RBAC rules ...
	I0719 15:52:16.534604    1544 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 15:52:16.535903    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 15:52:16.539838    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 15:52:16.541602    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0719 15:52:16.542999    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 15:52:16.544402    1544 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 15:52:16.549043    1544 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 15:52:16.720317    1544 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0719 15:52:16.937953    1544 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0719 15:52:16.938484    1544 kubeadm.go:322] 
	I0719 15:52:16.938519    1544 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0719 15:52:16.938527    1544 kubeadm.go:322] 
	I0719 15:52:16.938563    1544 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0719 15:52:16.938588    1544 kubeadm.go:322] 
	I0719 15:52:16.938601    1544 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0719 15:52:16.938634    1544 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 15:52:16.938667    1544 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 15:52:16.938672    1544 kubeadm.go:322] 
	I0719 15:52:16.938695    1544 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0719 15:52:16.938698    1544 kubeadm.go:322] 
	I0719 15:52:16.938723    1544 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 15:52:16.938730    1544 kubeadm.go:322] 
	I0719 15:52:16.938754    1544 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0719 15:52:16.938790    1544 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 15:52:16.938841    1544 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 15:52:16.938844    1544 kubeadm.go:322] 
	I0719 15:52:16.938885    1544 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 15:52:16.938940    1544 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0719 15:52:16.938947    1544 kubeadm.go:322] 
	I0719 15:52:16.938990    1544 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token za2ad5.mzmzgft4t0cdmv0r \
	I0719 15:52:16.939068    1544 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 \
	I0719 15:52:16.939079    1544 kubeadm.go:322] 	--control-plane 
	I0719 15:52:16.939082    1544 kubeadm.go:322] 
	I0719 15:52:16.939124    1544 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0719 15:52:16.939129    1544 kubeadm.go:322] 
	I0719 15:52:16.939171    1544 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token za2ad5.mzmzgft4t0cdmv0r \
	I0719 15:52:16.939230    1544 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 
	I0719 15:52:16.939287    1544 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:52:16.939293    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:52:16.939300    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:52:16.943663    1544 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:52:16.946263    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:52:16.949272    1544 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
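	The 457-byte conflist itself is copied from memory and never shown in the log. A representative bridge CNI config of the sort written here (illustrative values; the pod subnet in particular is an assumption, not taken from this run) would be:

	    # Illustrative only: the actual /etc/cni/net.d/1-k8s.conflist is
	    # generated in memory by minikube; the subnet below is an assumed
	    # placeholder, not read from this run.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "addIf": "true",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF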
	I0719 15:52:16.953970    1544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:52:16.954045    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:16.954043    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4 minikube.k8s.io/name=addons-101000 minikube.k8s.io/updated_at=2023_07_19T15_52_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:17.018155    1544 ops.go:34] apiserver oom_adj: -16
	I0719 15:52:17.018193    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:17.554061    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:18.053987    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:18.552765    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:19.054043    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:19.552096    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:20.054238    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:20.554269    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:21.054238    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:21.554235    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:22.054268    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:22.554252    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:23.054226    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:23.554175    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:24.054171    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:24.553977    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:25.054181    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:25.553942    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:26.053975    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:26.553905    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:27.053866    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:27.553341    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.052551    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.553332    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.053858    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.553880    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.052491    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.101801    1544 kubeadm.go:1081] duration metric: took 13.147921333s to wait for elevateKubeSystemPrivileges.
	I0719 15:52:30.101816    1544 kubeadm.go:406] StartCluster complete in 20.276428833s
	I0719 15:52:30.101824    1544 settings.go:142] acquiring lock: {Name:mk58631521ffd49c3231a31589bcae3549c3b53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:30.101973    1544 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:52:30.102164    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/kubeconfig: {Name:mk508b0ad49e7803e8fd5dcb96b45d1248a097b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:30.102360    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 15:52:30.102400    1544 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0719 15:52:30.102441    1544 addons.go:69] Setting volumesnapshots=true in profile "addons-101000"
	I0719 15:52:30.102447    1544 addons.go:231] Setting addon volumesnapshots=true in "addons-101000"
	I0719 15:52:30.102465    1544 addons.go:69] Setting metrics-server=true in profile "addons-101000"
	I0719 15:52:30.102475    1544 addons.go:231] Setting addon metrics-server=true in "addons-101000"
	I0719 15:52:30.102477    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102506    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102510    1544 addons.go:69] Setting ingress=true in profile "addons-101000"
	I0719 15:52:30.102533    1544 addons.go:69] Setting ingress-dns=true in profile "addons-101000"
	I0719 15:52:30.102557    1544 addons.go:231] Setting addon ingress=true in "addons-101000"
	I0719 15:52:30.102537    1544 addons.go:69] Setting inspektor-gadget=true in profile "addons-101000"
	I0719 15:52:30.102572    1544 addons.go:231] Setting addon inspektor-gadget=true in "addons-101000"
	I0719 15:52:30.102587    1544 addons.go:69] Setting registry=true in profile "addons-101000"
	I0719 15:52:30.102609    1544 addons.go:231] Setting addon registry=true in "addons-101000"
	I0719 15:52:30.102627    1544 addons.go:231] Setting addon ingress-dns=true in "addons-101000"
	I0719 15:52:30.102650    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102671    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102679    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102685    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:30.102713    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102729    1544 addons.go:69] Setting storage-provisioner=true in profile "addons-101000"
	I0719 15:52:30.102734    1544 addons.go:231] Setting addon storage-provisioner=true in "addons-101000"
	I0719 15:52:30.102749    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102918    1544 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-101000"
	I0719 15:52:30.102930    1544 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-101000"
	I0719 15:52:30.102932    1544 addons.go:69] Setting cloud-spanner=true in profile "addons-101000"
	I0719 15:52:30.102942    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102946    1544 addons.go:231] Setting addon cloud-spanner=true in "addons-101000"
	I0719 15:52:30.102975    1544 host.go:66] Checking if "addons-101000" exists ...
	W0719 15:52:30.103028    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103039    1544 addons.go:277] "addons-101000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0719 15:52:30.103042    1544 addons.go:467] Verifying addon ingress=true in "addons-101000"
	I0719 15:52:30.106383    1544 out.go:177] * Verifying ingress addon...
	W0719 15:52:30.103028    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103167    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103182    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	I0719 15:52:30.103217    1544 addons.go:69] Setting default-storageclass=true in profile "addons-101000"
	W0719 15:52:30.103227    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103229    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	I0719 15:52:30.103236    1544 addons.go:69] Setting gcp-auth=true in profile "addons-101000"
	W0719 15:52:30.103647    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.115464    1544 addons.go:277] "addons-101000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115470    1544 addons.go:277] "addons-101000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115473    1544 addons.go:277] "addons-101000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115476    1544 addons.go:277] "addons-101000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115487    1544 mustload.go:65] Loading cluster: addons-101000
	W0719 15:52:30.115490    1544 addons.go:277] "addons-101000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115530    1544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-101000"
	W0719 15:52:30.115554    1544 addons.go:277] "addons-101000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115896    1544 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 15:52:30.118371    1544 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 15:52:30.125455    1544 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 15:52:30.125463    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 15:52:30.125471    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.122419    1544 addons.go:467] Verifying addon registry=true in "addons-101000"
	I0719 15:52:30.122435    1544 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0719 15:52:30.133254    1544 out.go:177] * Verifying registry addon...
	I0719 15:52:30.122534    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:30.122472    1544 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-101000"
	I0719 15:52:30.126814    1544 addons.go:231] Setting addon default-storageclass=true in "addons-101000"
	I0719 15:52:30.127301    1544 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 15:52:30.129401    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:52:30.136476    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:52:30.136487    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.139357    1544 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 15:52:30.136602    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.137003    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 15:52:30.137526    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.146971    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 15:52:30.147364    1544 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:30.147370    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:52:30.147377    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.154043    1544 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 15:52:30.155110    1544 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 15:52:30.167928    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
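	Decoded, that pipeline fetches the coredns ConfigMap, uses sed to splice a hosts block (plus a log directive) into the Corefile ahead of the /etc/resolv.conf forwarder, and replaces the ConfigMap. The injected fragment, read straight out of the sed script, is:

	    hosts {
	       192.168.105.1 host.minikube.internal
	       fallthrough
	    }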
	I0719 15:52:30.178534    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 15:52:30.178543    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 15:52:30.183588    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 15:52:30.183597    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 15:52:30.191691    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 15:52:30.191699    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 15:52:30.203614    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:52:30.203623    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 15:52:30.208695    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:30.219630    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:52:30.219641    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:52:30.228005    1544 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 15:52:30.228015    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 15:52:30.232889    1544 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:30.232896    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 15:52:30.237389    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:30.293905    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:30.293918    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:52:30.323074    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:30.620570    1544 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-101000" context rescaled to 1 replicas
	I0719 15:52:30.620588    1544 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 15:52:30.627928    1544 out.go:177] * Verifying Kubernetes components...
	I0719 15:52:30.631983    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:30.771514    1544 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0719 15:52:30.880793    1544 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 15:52:30.880817    1544 retry.go:31] will retry after 267.649411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 15:52:30.944103    1544 addons.go:467] Verifying addon metrics-server=true in "addons-101000"
	I0719 15:52:30.944554    1544 node_ready.go:35] waiting up to 6m0s for node "addons-101000" to be "Ready" ...
	I0719 15:52:30.946971    1544 node_ready.go:49] node "addons-101000" has status "Ready":"True"
	I0719 15:52:30.946984    1544 node_ready.go:38] duration metric: took 2.412542ms waiting for node "addons-101000" to be "Ready" ...
	I0719 15:52:30.946988    1544 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods, including those with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0719 15:52:30.951440    1544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:31.148622    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:32.960900    1544 pod_ready.go:102] pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:33.460293    1544 pod_ready.go:92] pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.460303    1544 pod_ready.go:81] duration metric: took 2.50887925s waiting for pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.460308    1544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.463445    1544 pod_ready.go:92] pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.463453    1544 pod_ready.go:81] duration metric: took 3.140459ms waiting for pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.463458    1544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.466021    1544 pod_ready.go:92] pod "etcd-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.466025    1544 pod_ready.go:81] duration metric: took 2.564083ms waiting for pod "etcd-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.466029    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.468619    1544 pod_ready.go:92] pod "kube-apiserver-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.468625    1544 pod_ready.go:81] duration metric: took 2.592875ms waiting for pod "kube-apiserver-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.468629    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.471097    1544 pod_ready.go:92] pod "kube-controller-manager-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.471103    1544 pod_ready.go:81] duration metric: took 2.47075ms waiting for pod "kube-controller-manager-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.471106    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jpdlk" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.691576    1544 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542959542s)
	I0719 15:52:33.857829    1544 pod_ready.go:92] pod "kube-proxy-jpdlk" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.857838    1544 pod_ready.go:81] duration metric: took 386.731917ms waiting for pod "kube-proxy-jpdlk" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.857843    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:34.259787    1544 pod_ready.go:92] pod "kube-scheduler-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:34.259798    1544 pod_ready.go:81] duration metric: took 401.956208ms waiting for pod "kube-scheduler-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:34.259802    1544 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:36.666794    1544 pod_ready.go:102] pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:36.752105    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 15:52:36.752122    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:36.786675    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 15:52:36.794559    1544 addons.go:231] Setting addon gcp-auth=true in "addons-101000"
	I0719 15:52:36.794583    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:36.795348    1544 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 15:52:36.795361    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:36.829668    1544 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0719 15:52:36.833634    1544 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0719 15:52:36.837615    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 15:52:36.837621    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 15:52:36.843015    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 15:52:36.843020    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 15:52:36.847964    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 15:52:36.847971    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0719 15:52:36.853694    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 15:52:37.162175    1544 pod_ready.go:92] pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:37.162186    1544 pod_ready.go:81] duration metric: took 2.902411833s waiting for pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:37.162191    1544 pod_ready.go:38] duration metric: took 6.215265042s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:37.162200    1544 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:37.162261    1544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:37.213333    1544 api_server.go:72] duration metric: took 6.592798333s to wait for apiserver process to appear ...
	I0719 15:52:37.213345    1544 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:37.213352    1544 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0719 15:52:37.213987    1544 addons.go:467] Verifying addon gcp-auth=true in "addons-101000"
	I0719 15:52:37.217265    1544 out.go:177] * Verifying gcp-auth addon...
	I0719 15:52:37.224564    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 15:52:37.226285    1544 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
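	That probe is reproducible by hand; a one-line equivalent (using -k because the cluster CA is self-signed and not in the host trust store):

	    # Manually reproduce the healthz probe minikube just ran; -k skips TLS
	    # verification of the cluster's self-signed CA.
	    curl -k https://192.168.105.2:8443/healthz
	    # a healthy API server answers: ok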
	I0719 15:52:37.227830    1544 api_server.go:141] control plane version: v1.27.3
	I0719 15:52:37.227837    1544 api_server.go:131] duration metric: took 14.489167ms to wait for apiserver health ...
	I0719 15:52:37.227841    1544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:37.231500    1544 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 15:52:37.231508    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:37.232481    1544 system_pods.go:59] 10 kube-system pods found
	I0719 15:52:37.232488    1544 system_pods.go:61] "coredns-5d78c9869d-4dqmf" [3335e5bb-cd1b-4f88-8662-5f52c470886e] Running
	I0719 15:52:37.232491    1544 system_pods.go:61] "coredns-5d78c9869d-knvd5" [8982b7a7-b4c0-4d8f-aaac-24b2aa68727d] Running
	I0719 15:52:37.232493    1544 system_pods.go:61] "etcd-addons-101000" [76e23cae-d673-4ce3-8c1a-28173c4ad7fb] Running
	I0719 15:52:37.232495    1544 system_pods.go:61] "kube-apiserver-addons-101000" [b5b978e3-f407-4753-974c-733ac90e64b3] Running
	I0719 15:52:37.232497    1544 system_pods.go:61] "kube-controller-manager-addons-101000" [d4160a3e-e6e8-450b-8484-7fe93b6a935c] Running
	I0719 15:52:37.232500    1544 system_pods.go:61] "kube-proxy-jpdlk" [e6fd914c-3322-4343-bfc3-e20cc61a9c1b] Running
	I0719 15:52:37.232503    1544 system_pods.go:61] "kube-scheduler-addons-101000" [22d2fcb7-43d8-4531-958e-506bd7b1d4b6] Running
	I0719 15:52:37.232506    1544 system_pods.go:61] "metrics-server-844d8db974-vt8ml" [a25120a0-a3f2-4a32-851b-21a7b451818f] Running
	I0719 15:52:37.232510    1544 system_pods.go:61] "snapshot-controller-75bbb956b9-9qbz2" [449931f4-f646-44cb-b8ae-3246b6e14db1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.232514    1544 system_pods.go:61] "snapshot-controller-75bbb956b9-gsppf" [358b5d0e-1f68-47f3-bc7d-ebbf387b2e40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.232518    1544 system_pods.go:74] duration metric: took 4.674833ms to wait for pod list to return data ...
	I0719 15:52:37.232523    1544 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:37.235194    1544 default_sa.go:45] found service account: "default"
	I0719 15:52:37.235203    1544 default_sa.go:55] duration metric: took 2.676875ms for default service account to be created ...
	I0719 15:52:37.235207    1544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:37.262055    1544 system_pods.go:86] 10 kube-system pods found
	I0719 15:52:37.262063    1544 system_pods.go:89] "coredns-5d78c9869d-4dqmf" [3335e5bb-cd1b-4f88-8662-5f52c470886e] Running
	I0719 15:52:37.262066    1544 system_pods.go:89] "coredns-5d78c9869d-knvd5" [8982b7a7-b4c0-4d8f-aaac-24b2aa68727d] Running
	I0719 15:52:37.262069    1544 system_pods.go:89] "etcd-addons-101000" [76e23cae-d673-4ce3-8c1a-28173c4ad7fb] Running
	I0719 15:52:37.262071    1544 system_pods.go:89] "kube-apiserver-addons-101000" [b5b978e3-f407-4753-974c-733ac90e64b3] Running
	I0719 15:52:37.262073    1544 system_pods.go:89] "kube-controller-manager-addons-101000" [d4160a3e-e6e8-450b-8484-7fe93b6a935c] Running
	I0719 15:52:37.262075    1544 system_pods.go:89] "kube-proxy-jpdlk" [e6fd914c-3322-4343-bfc3-e20cc61a9c1b] Running
	I0719 15:52:37.262078    1544 system_pods.go:89] "kube-scheduler-addons-101000" [22d2fcb7-43d8-4531-958e-506bd7b1d4b6] Running
	I0719 15:52:37.262080    1544 system_pods.go:89] "metrics-server-844d8db974-vt8ml" [a25120a0-a3f2-4a32-851b-21a7b451818f] Running
	I0719 15:52:37.262085    1544 system_pods.go:89] "snapshot-controller-75bbb956b9-9qbz2" [449931f4-f646-44cb-b8ae-3246b6e14db1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.262089    1544 system_pods.go:89] "snapshot-controller-75bbb956b9-gsppf" [358b5d0e-1f68-47f3-bc7d-ebbf387b2e40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.262093    1544 system_pods.go:126] duration metric: took 26.883625ms to wait for k8s-apps to be running ...
	I0719 15:52:37.262096    1544 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:37.262153    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:37.267299    1544 system_svc.go:56] duration metric: took 5.200291ms WaitForService to wait for kubelet.
	I0719 15:52:37.267305    1544 kubeadm.go:581] duration metric: took 6.646776125s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0719 15:52:37.267313    1544 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:37.460427    1544 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0719 15:52:37.460461    1544 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:37.460466    1544 node_conditions.go:105] duration metric: took 193.152542ms to run NodePressure ...
	I0719 15:52:37.460471    1544 start.go:228] waiting for startup goroutines ...
	I0719 15:52:37.735761    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:38.234684    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:38.735371    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:39.235765    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:39.735907    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:40.235373    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:40.734681    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:41.235287    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:41.735207    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:42.235545    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:42.734812    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:43.235286    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:43.736840    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:44.235205    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:44.738509    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:45.235248    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:45.735521    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:46.236102    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:46.735748    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:47.235790    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:47.736196    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:48.235182    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:48.735185    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:49.237494    1544 kapi.go:107] duration metric: took 12.013055167s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 15:52:49.242337    1544 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-101000 cluster.
	I0719 15:52:49.247318    1544 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 15:52:49.251640    1544 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
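	As a concrete illustration of that opt-out (the pod name and image are hypothetical; the label key is the one named above):

	    # Hypothetical pod that opts out of gcp-auth credential mounting via
	    # the gcp-auth-skip-secret label; name and image are placeholders.
	    kubectl apply -f - <<'EOF'
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: demo-no-gcp-auth
	      labels:
	        gcp-auth-skip-secret: "true"
	    spec:
	      containers:
	      - name: demo
	        image: busybox
	        command: ["sleep", "3600"]
	    EOF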
	I0719 15:58:30.120713    1544 kapi.go:107] duration metric: took 6m0.008647875s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0719 15:58:30.121029    1544 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0719 15:58:30.144635    1544 kapi.go:107] duration metric: took 6m0.001552791s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 15:58:30.144676    1544 kapi.go:107] duration metric: took 6m0.011563667s to wait for kubernetes.io/minikube-addons=registry ...
	W0719 15:58:30.144755    1544 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0719 15:58:30.144795    1544 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0719 15:58:30.152624    1544 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, inspektor-gadget, default-storageclass, metrics-server, volumesnapshots, gcp-auth
	I0719 15:58:30.164713    1544 addons.go:502] enable addons completed in 6m0.066202625s: enabled=[ingress-dns storage-provisioner cloud-spanner inspektor-gadget default-storageclass metrics-server volumesnapshots gcp-auth]
	I0719 15:58:30.164763    1544 start.go:233] waiting for cluster config update ...
	I0719 15:58:30.164788    1544 start.go:242] writing updated cluster config ...
	I0719 15:58:30.169625    1544 ssh_runner.go:195] Run: rm -f paused
	I0719 15:58:30.237703    1544 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0719 15:58:30.241711    1544 out.go:177] * Done! kubectl is now configured to use "addons-101000" cluster and "default" namespace by default
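	With the kubeconfig in place, the freshly started cluster can be sanity-checked from the host with standard kubectl commands, for example:

	    # Post-start sanity checks against the addons-101000 cluster.
	    kubectl get nodes -o wide
	    kubectl get pods -A
	    kubectl -n kube-system get deploy,svc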
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-07-19 22:51:58 UTC, ends at Wed 2023-07-19 23:10:30 UTC. --
	Jul 19 22:52:43 addons-101000 dockerd[1155]: time="2023-07-19T22:52:43.120662630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 22:52:43 addons-101000 dockerd[1155]: time="2023-07-19T22:52:43.120679566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:43 addons-101000 dockerd[1155]: time="2023-07-19T22:52:43.182515855Z" level=info msg="shim disconnected" id=1951c60ed8efcc7337440102374206fec32f673446df831278ef60ba51446933 namespace=moby
	Jul 19 22:52:43 addons-101000 dockerd[1155]: time="2023-07-19T22:52:43.182544012Z" level=warning msg="cleaning up after shim disconnected" id=1951c60ed8efcc7337440102374206fec32f673446df831278ef60ba51446933 namespace=moby
	Jul 19 22:52:43 addons-101000 dockerd[1155]: time="2023-07-19T22:52:43.182548142Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 22:52:43 addons-101000 dockerd[1149]: time="2023-07-19T22:52:43.182620558Z" level=info msg="ignoring event" container=1951c60ed8efcc7337440102374206fec32f673446df831278ef60ba51446933 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 22:52:44 addons-101000 dockerd[1155]: time="2023-07-19T22:52:44.190930161Z" level=info msg="shim disconnected" id=feffc055991a5f9040fb2443f3d7d925f26478390a78d13e35fcf50a7b2bd9b3 namespace=moby
	Jul 19 22:52:44 addons-101000 dockerd[1149]: time="2023-07-19T22:52:44.191139552Z" level=info msg="ignoring event" container=feffc055991a5f9040fb2443f3d7d925f26478390a78d13e35fcf50a7b2bd9b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 22:52:44 addons-101000 dockerd[1155]: time="2023-07-19T22:52:44.191384231Z" level=warning msg="cleaning up after shim disconnected" id=feffc055991a5f9040fb2443f3d7d925f26478390a78d13e35fcf50a7b2bd9b3 namespace=moby
	Jul 19 22:52:44 addons-101000 dockerd[1155]: time="2023-07-19T22:52:44.191405671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1149]: time="2023-07-19T22:52:45.210876361Z" level=info msg="ignoring event" container=ff082e616338f9ee177498a150f356af0b22b9e031e5c62344154819899b6b1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.211082068Z" level=info msg="shim disconnected" id=ff082e616338f9ee177498a150f356af0b22b9e031e5c62344154819899b6b1b namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.211367980Z" level=warning msg="cleaning up after shim disconnected" id=ff082e616338f9ee177498a150f356af0b22b9e031e5c62344154819899b6b1b namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.211378115Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336810596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336856434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336888007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336900019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:45 addons-101000 cri-dockerd[1051]: time="2023-07-19T22:52:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0ee1896a26320c3f2a0276be91800f69e284fd28f8830c62afec6149f3e01934/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 22:52:45 addons-101000 dockerd[1149]: time="2023-07-19T22:52:45.710423798Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jul 19 22:52:48 addons-101000 cri-dockerd[1051]: time="2023-07-19T22:52:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502093407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502123765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502132147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502138277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	1cc3491bfa533       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              17 minutes ago      Running             gcp-auth                     0                   0ee1896a26320
	2b9a59faa57f0       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   17 minutes ago      Running             volume-snapshot-controller   0                   4d989e719f82a
	d47989aa362eb       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   17 minutes ago      Running             volume-snapshot-controller   0                   16bf9bed7271f
	1916a4d50b456       registry.k8s.io/metrics-server/metrics-server@sha256:c60778fa1c44d0c5a0c4530ebe83f9243ee6fc02f4c3dc59226c201931350b10     17 minutes ago      Running             metrics-server               0                   6777dc48b461a
	0df05b2c74afc       97e04611ad434                                                                                                             17 minutes ago      Running             coredns                      0                   49ab25acb4281
	19588c52e552d       fb73e92641fd5                                                                                                             17 minutes ago      Running             kube-proxy                   0                   4d2bba9dbbd12
	f959c7f626d6e       24bc64e911039                                                                                                             18 minutes ago      Running             etcd                         0                   e9214702e68a5
	c6c632dd083f2       bcb9e554eaab6                                                                                                             18 minutes ago      Running             kube-scheduler               0                   956f93b928e2f
	862babcc9993e       ab3683b584ae5                                                                                                             18 minutes ago      Running             kube-controller-manager      0                   81af0dc9e0f17
	5984dda0d68af       39dfb036b0986                                                                                                             18 minutes ago      Running             kube-apiserver               0                   b8a23dc6dd212
	
	* 
	* ==> coredns [0df05b2c74af] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38035 - 43333 "HINFO IN 3178013197050500524.6871022848785512211. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004455439s
	[INFO] 10.244.0.9:44781 - 2820 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000096121s
	[INFO] 10.244.0.9:49458 - 31580 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000042327s
	[INFO] 10.244.0.9:47092 - 42379 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000038365s
	[INFO] 10.244.0.9:51547 - 25508 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000044495s
	[INFO] 10.244.0.9:33952 - 7053 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043995s
	[INFO] 10.244.0.9:46272 - 9283 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000029482s
	[INFO] 10.244.0.9:44297 - 58626 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001146116s
	[INFO] 10.244.0.9:36915 - 33133 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001094699s
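	
	The NXDOMAIN fan-out above is the ndots:5 search-path expansion from the pod's resolv.conf (the one cri-dockerd wrote for pod 0ee1896a26320 in the Docker log): storage.googleapis.com has fewer than five dots, so CoreDNS is queried for it under each search suffix before the bare name finally resolves with NOERROR. A minimal way to confirm the search path, assuming the gcp-auth image ships cat:
	
	    kubectl --context addons-101000 -n gcp-auth exec gcp-auth-58478865f7-hfg7x -- cat /etc/resolv.conf
	    # nameserver 10.96.0.10
	    # search gcp-auth.svc.cluster.local svc.cluster.local cluster.local
	    # options ndots:5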
	
	* 
	* ==> describe nodes <==
	* Name:               addons-101000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-101000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4
	                    minikube.k8s.io/name=addons-101000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_19T15_52_16_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Jul 2023 22:52:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-101000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Jul 2023 23:10:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-101000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 06f7171a2e3b478b8a006d3ed11bcad4
	  System UUID:                06f7171a2e3b478b8a006d3ed11bcad4
	  Boot ID:                    388cb244-002c-43f0-bc4d-d5cefb6c596c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-hfg7x                0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-5d78c9869d-knvd5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 etcd-addons-101000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-101000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-101000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-jpdlk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-101000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 metrics-server-844d8db974-vt8ml          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         18m
	  kube-system                 snapshot-controller-75bbb956b9-9qbz2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-75bbb956b9-gsppf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
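	  (For reference, the 850m CPU request is the sum of the per-pod requests listed above: 100m coredns + 100m etcd + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler + 100m metrics-server = 850m, which over the node's 2000m allocatable is 850/2000 = 42%. Memory: 70Mi + 100Mi + 200Mi = 370Mi, roughly 9% of the 3905012Ki allocatable.)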
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node addons-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node addons-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node addons-101000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node addons-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node addons-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node addons-101000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                kubelet          Node addons-101000 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node addons-101000 event: Registered Node addons-101000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.640919] EINJ: EINJ table not found.
	[  +0.493985] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044020] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000805] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jul19 22:52] systemd-fstab-generator[483]: Ignoring "noauto" for root device
	[  +0.066120] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.415432] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.174029] systemd-fstab-generator[789]: Ignoring "noauto" for root device
	[  +0.074321] systemd-fstab-generator[800]: Ignoring "noauto" for root device
	[  +0.082963] systemd-fstab-generator[813]: Ignoring "noauto" for root device
	[  +1.234248] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +0.083333] systemd-fstab-generator[982]: Ignoring "noauto" for root device
	[  +0.083834] systemd-fstab-generator[993]: Ignoring "noauto" for root device
	[  +0.077345] systemd-fstab-generator[1004]: Ignoring "noauto" for root device
	[  +0.087881] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +2.511224] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
	[  +2.197027] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.212424] systemd-fstab-generator[1454]: Ignoring "noauto" for root device
	[  +5.140733] systemd-fstab-generator[2321]: Ignoring "noauto" for root device
	[ +14.969880] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.331585] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.144611] kauditd_printk_skb: 47 callbacks suppressed
	[  +7.075011] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.120561] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [f959c7f626d6] <==
	* {"level":"info","ts":"2023-07-19T22:52:12.834Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-101000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-19T23:02:13.581Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":796}
	{"level":"info","ts":"2023-07-19T23:02:13.583Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":796,"took":"1.901103ms","hash":1438469743}
	{"level":"info","ts":"2023-07-19T23:02:13.583Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1438469743,"revision":796,"compact-revision":-1}
	{"level":"info","ts":"2023-07-19T23:07:13.591Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":947}
	{"level":"info","ts":"2023-07-19T23:07:13.593Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":947,"took":"1.234408ms","hash":3746673681}
	{"level":"info","ts":"2023-07-19T23:07:13.593Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3746673681,"revision":947,"compact-revision":796}
	
	* 
	* ==> gcp-auth [1cc3491bfa53] <==
	* 2023/07/19 22:52:48 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  23:10:30 up 18 min,  0 users,  load average: 0.24, 0.30, 0.27
	Linux addons-101000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5984dda0d68a] <==
	* I0719 22:57:14.240083       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 22:57:14.246806       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 22:57:14.246948       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 22:57:14.250136       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 22:58:14.167740       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 22:59:14.169392       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:00:14.169663       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:01:14.170626       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:02:14.171221       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:02:14.242338       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:02:14.242604       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:02:14.242891       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:02:14.242923       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:02:14.252138       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:03:14.170675       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:04:14.169449       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:05:14.171272       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:06:14.171498       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:07:14.168568       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:07:14.244513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:07:14.245028       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:07:14.260797       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:08:14.169738       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:09:14.170808       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:10:14.170454       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [862babcc9993] <==
	* I0719 22:52:43.092574       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:43.110448       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:44.118612       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:44.212265       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.139777       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:45.152565       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.216322       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.218685       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.220523       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.220604       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0719 22:52:45.235773       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.163044       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.176901       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.183652       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.183863       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0719 22:52:46.193621       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:59.164951       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0719 22:52:59.165085       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0719 22:52:59.266614       1 shared_informer.go:318] Caches are synced for resource quota
	I0719 22:52:59.590599       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0719 22:52:59.691557       1 shared_informer.go:318] Caches are synced for garbage collector
	I0719 22:53:15.026654       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:53:15.048574       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:53:16.014227       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:53:16.037423       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [19588c52e552] <==
	* I0719 22:52:31.940544       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0719 22:52:31.940603       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0719 22:52:31.940629       1 server_others.go:554] "Using iptables proxy"
	I0719 22:52:31.978237       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0719 22:52:31.978247       1 server_others.go:192] "Using iptables Proxier"
	I0719 22:52:31.978272       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 22:52:31.978547       1 server.go:658] "Version info" version="v1.27.3"
	I0719 22:52:31.978554       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 22:52:31.980358       1 config.go:188] "Starting service config controller"
	I0719 22:52:31.980401       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0719 22:52:31.980465       1 config.go:97] "Starting endpoint slice config controller"
	I0719 22:52:31.980482       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0719 22:52:31.980953       1 config.go:315] "Starting node config controller"
	I0719 22:52:31.980984       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0719 22:52:32.081146       1 shared_informer.go:318] Caches are synced for node config
	I0719 22:52:32.081155       1 shared_informer.go:318] Caches are synced for service config
	I0719 22:52:32.081163       1 shared_informer.go:318] Caches are synced for endpoint slice config
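	
	The route_localnet line above means kube-proxy set a sysctl so NodePort services also answer on loopback. One way to verify it from the host, via the same minikube binary the tests use:
	
	    out/minikube-darwin-arm64 -p addons-101000 ssh "sysctl net.ipv4.conf.all.route_localnet"    # expect the value 1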
	
	* 
	* ==> kube-scheduler [c6c632dd083f] <==
	* W0719 22:52:14.250631       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 22:52:14.250639       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 22:52:14.250680       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 22:52:14.250689       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 22:52:14.250732       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 22:52:14.250739       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 22:52:14.250771       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 22:52:14.250778       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 22:52:14.250830       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 22:52:14.250837       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 22:52:14.250850       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 22:52:14.250875       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 22:52:14.250933       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:14.250977       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:14.251012       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 22:52:14.251046       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 22:52:14.251076       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:14.251088       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:15.080697       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 22:52:15.080736       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 22:52:15.086216       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:15.086240       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:15.262476       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 22:52:15.262526       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0719 22:52:15.546346       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-07-19 22:51:58 UTC, ends at Wed 2023-07-19 23:10:30 UTC. --
	Jul 19 23:05:16 addons-101000 kubelet[2341]: E0719 23:05:16.804146    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:05:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:05:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:05:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:06:16 addons-101000 kubelet[2341]: E0719 23:06:16.804697    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:06:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:06:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:06:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:07:16 addons-101000 kubelet[2341]: W0719 23:07:16.782703    2341 machine.go:65] Cannot read vendor id correctly, set empty.
	Jul 19 23:07:16 addons-101000 kubelet[2341]: E0719 23:07:16.808270    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:07:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:07:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:07:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:08:16 addons-101000 kubelet[2341]: E0719 23:08:16.799739    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:08:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:08:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:08:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:09:16 addons-101000 kubelet[2341]: E0719 23:09:16.799632    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:09:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:09:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:09:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:10:16 addons-101000 kubelet[2341]: E0719 23:10:16.804319    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:10:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:10:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:10:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
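	
	The repeating canary error is consistent across the whole run: the Buildroot guest kernel lacks the ip6tables nat table, so kubelet's periodic ip6tables health canary can never create its KUBE-KUBELET-CANARY chain; it does not by itself mark the node unhealthy. A hedged check from the host (the module name is an assumption; the ISO kernel may simply not ship it):
	
	    out/minikube-darwin-arm64 -p addons-101000 ssh "sudo ip6tables -t nat -L"      # reproduces: table does not exist
	    out/minikube-darwin-arm64 -p addons-101000 ssh "sudo modprobe ip6table_nat"    # fails if the module is absent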
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-101000 -n addons-101000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-101000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (720.88s)

TestAddons/parallel/Ingress (0.73s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-101000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-101000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (35.610667ms)

** stderr ** 
	error: no matching resources found
** /stderr **
addons_test.go:184: failed waiting for ingress-nginx-controller : exit status 1
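
The stderr above pins down the failure mode: "no matching resources found" means no controller pod was ever created, not that a pod existed and stayed unready. A minimal follow-up using the same context and selector as the failing wait (a hypothetical next step, not part of the test itself):

    kubectl --context addons-101000 -n ingress-nginx get pods -l app.kubernetes.io/component=controller
    kubectl --context addons-101000 -n ingress-nginx get deploy,events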
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-101000 -n addons-101000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-101000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | -p download-only-744000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | -p download-only-744000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| delete  | -p download-only-744000        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| delete  | -p download-only-744000        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| start   | --download-only -p             | binary-mirror-101000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | binary-mirror-101000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-101000        | binary-mirror-101000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| start   | -p addons-101000               | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:58 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 16:10 PDT |                     |
	|         | addons-101000                  |                      |         |         |                     |                     |
	| addons  | addons-101000 addons           | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 16:10 PDT | 19 Jul 23 16:10 PDT |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/19 15:51:46
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:51:46.194297    1544 out.go:296] Setting OutFile to fd 1 ...
	I0719 15:51:46.194422    1544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:46.194425    1544 out.go:309] Setting ErrFile to fd 2...
	I0719 15:51:46.194428    1544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:46.194533    1544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 15:51:46.195601    1544 out.go:303] Setting JSON to false
	I0719 15:51:46.210650    1544 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1277,"bootTime":1689805829,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 15:51:46.210718    1544 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 15:51:46.215551    1544 out.go:177] * [addons-101000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 15:51:46.222492    1544 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 15:51:46.226347    1544 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:51:46.222559    1544 notify.go:220] Checking for updates...
	I0719 15:51:46.229495    1544 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 15:51:46.232495    1544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:51:46.235514    1544 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 15:51:46.238453    1544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:51:46.241624    1544 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 15:51:46.245483    1544 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 15:51:46.252546    1544 start.go:298] selected driver: qemu2
	I0719 15:51:46.252550    1544 start.go:880] validating driver "qemu2" against <nil>
	I0719 15:51:46.252555    1544 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:51:46.254378    1544 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 15:51:46.257444    1544 out.go:177] * Automatically selected the socket_vmnet network
	I0719 15:51:46.260545    1544 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:51:46.260571    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:51:46.260577    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:51:46.260582    1544 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 15:51:46.260588    1544 start_flags.go:319] config:
	{Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:51:46.264618    1544 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:51:46.272375    1544 out.go:177] * Starting control plane node addons-101000 in cluster addons-101000
	I0719 15:51:46.276444    1544 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:51:46.276475    1544 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 15:51:46.276490    1544 cache.go:57] Caching tarball of preloaded images
	I0719 15:51:46.276551    1544 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 15:51:46.276557    1544 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 15:51:46.276783    1544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json ...
	I0719 15:51:46.276796    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json: {Name:mk5e2042adc5d3df20329816c5917e6964724b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:51:46.277018    1544 start.go:365] acquiring machines lock for addons-101000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:51:46.277112    1544 start.go:369] acquired machines lock for "addons-101000" in 87.917µs
	I0719 15:51:46.277123    1544 start.go:93] Provisioning new machine with config: &{Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 15:51:46.277150    1544 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 15:51:46.284454    1544 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 15:51:46.605350    1544 start.go:159] libmachine.API.Create for "addons-101000" (driver="qemu2")
	I0719 15:51:46.605388    1544 client.go:168] LocalClient.Create starting
	I0719 15:51:46.605532    1544 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 15:51:46.811960    1544 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 15:51:46.895754    1544 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 15:51:47.292066    1544 main.go:141] libmachine: Creating SSH key...
	I0719 15:51:47.506919    1544 main.go:141] libmachine: Creating Disk image...
	I0719 15:51:47.506931    1544 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 15:51:47.507226    1544 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.541355    1544 main.go:141] libmachine: STDOUT: 
	I0719 15:51:47.541387    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.541456    1544 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2 +20000M
	I0719 15:51:47.548773    1544 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 15:51:47.548786    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.548801    1544 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
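The disk-creation step above boils down to two qemu-img calls: convert a raw seed image to qcow2, then grow its virtual size by +20000M (qcow2 stays sparse, so the file on disk stays small). Below is a minimal Go sketch of the same convert-then-resize sequence via os/exec; createDisk and the paths are illustrative, not minikube's own API:

	// Sketch: raw seed -> qcow2 disk, mirroring the two qemu-img runs above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// createDisk converts a raw image to qcow2, then grows the virtual size.
	func createDisk(raw, qcow2 string, extraMB int) error {
		if out, err := exec.Command("qemu-img", "convert",
			"-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		// "+NM" grows the virtual size only; the qcow2 file stays sparse.
		if out, err := exec.Command("qemu-img", "resize",
			qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			log.Fatal(err)
		}
	}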
	I0719 15:51:47.548806    1544 main.go:141] libmachine: Starting QEMU VM...
	I0719 15:51:47.548843    1544 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:3a:02:96:05:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.614276    1544 main.go:141] libmachine: STDOUT: 
	I0719 15:51:47.614314    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.614319    1544 main.go:141] libmachine: Attempt 0
	I0719 15:51:47.614337    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:49.616493    1544 main.go:141] libmachine: Attempt 1
	I0719 15:51:49.616586    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:51.618782    1544 main.go:141] libmachine: Attempt 2
	I0719 15:51:51.618840    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:53.620905    1544 main.go:141] libmachine: Attempt 3
	I0719 15:51:53.620918    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:55.622923    1544 main.go:141] libmachine: Attempt 4
	I0719 15:51:55.622935    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:57.624701    1544 main.go:141] libmachine: Attempt 5
	I0719 15:51:57.624722    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:59.626797    1544 main.go:141] libmachine: Attempt 6
	I0719 15:51:59.626823    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:59.626967    1544 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0719 15:51:59.626997    1544 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b9ba8e}
	I0719 15:51:59.627004    1544 main.go:141] libmachine: Found match: 36:3a:2:96:5:da
	I0719 15:51:59.627013    1544 main.go:141] libmachine: IP: 192.168.105.2
	I0719 15:51:59.627019    1544 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
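Note that the MAC searched for, 36:3a:2:96:5:da, is not byte-for-byte the one passed to QEMU (36:3a:02:96:05:da): macOS's /var/db/dhcpd_leases drops leading zeros in each octet, so the address has to be normalized before matching, as the matched lease entry above confirms. A sketch of that normalize-and-scan loop; the ip_address/hw_address field names and their ordering follow the lease entry shown in the log and are otherwise an assumption:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// trimMAC strips the leading zero from each octet: "02" -> "2".
	func trimMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			parts[i] = strings.TrimLeft(p, "0")
			if parts[i] == "" {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}

	// findLeaseIP scans the lease file for an hw_address line carrying the
	// MAC and returns the ip_address of the surrounding entry.
	func findLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()
		want := trimMAC(mac)
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, want) {
				return ip, sc.Err()
			}
		}
		return "", fmt.Errorf("no lease for %s", want)
	}

	func main() {
		fmt.Println(findLeaseIP("/var/db/dhcpd_leases", "36:3a:02:96:05:da"))
	}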
	I0719 15:52:01.648988    1544 machine.go:88] provisioning docker machine ...
	I0719 15:52:01.649049    1544 buildroot.go:166] provisioning hostname "addons-101000"
	I0719 15:52:01.650570    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.651376    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.651395    1544 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-101000 && echo "addons-101000" | sudo tee /etc/hostname
	I0719 15:52:01.738896    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-101000
	
	I0719 15:52:01.739019    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.739522    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.739541    1544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-101000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-101000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-101000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:52:01.809650    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:52:01.809665    1544 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15585-1056/.minikube CaCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15585-1056/.minikube}
	I0719 15:52:01.809675    1544 buildroot.go:174] setting up certificates
	I0719 15:52:01.809702    1544 provision.go:83] configureAuth start
	I0719 15:52:01.809710    1544 provision.go:138] copyHostCerts
	I0719 15:52:01.809873    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem (1675 bytes)
	I0719 15:52:01.810191    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem (1082 bytes)
	I0719 15:52:01.810327    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem (1123 bytes)
	I0719 15:52:01.810465    1544 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem org=jenkins.addons-101000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-101000]
	I0719 15:52:01.879682    1544 provision.go:172] copyRemoteCerts
	I0719 15:52:01.879750    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:52:01.879766    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:01.912803    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:52:01.919846    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 15:52:01.926696    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:52:01.934065    1544 provision.go:86] duration metric: configureAuth took 124.357417ms
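The "generating server cert" step inside configureAuth is ordinary x509 issuance: a leaf certificate signed by the local CA whose SANs carry the VM IP and the hostnames listed in the san=[...] line. A stand-in sketch using only Go's standard library; the in-memory self-signed CA here replaces the ca.pem/ca-key.pem files named above, and error handling is elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed CA standing in for ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert: SANs mirror the san=[...] list in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-101000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			IPAddresses:  []net.IP{net.ParseIP("192.168.105.2"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "addons-101000"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}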
	I0719 15:52:01.934074    1544 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:52:01.934167    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:01.934205    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.934418    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.934423    1544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 15:52:01.991251    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 15:52:01.991259    1544 buildroot.go:70] root file system type: tmpfs
	I0719 15:52:01.991320    1544 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 15:52:01.991364    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.991596    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.991636    1544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 15:52:02.049859    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 15:52:02.049895    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:02.050139    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:02.050148    1544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 15:52:02.386931    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 15:52:02.386943    1544 machine.go:91] provisioned docker machine in 737.934792ms
	I0719 15:52:02.386948    1544 client.go:171] LocalClient.Create took 15.78172825s
	I0719 15:52:02.386964    1544 start.go:167] duration metric: libmachine.API.Create for "addons-101000" took 15.781794083s
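The "sudo diff -u ... || { mv; daemon-reload; enable; restart; }" command a few lines up is an idempotence guard: the unit file is only swapped in, and docker only restarted, when the freshly rendered docker.service actually differs from what is installed. A local-filesystem sketch of that write-if-changed idiom; installIfChanged is a hypothetical helper, not minikube's:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installIfChanged writes path only when the content differs, and
	// reports whether a daemon-reload/restart would be needed.
	func installIfChanged(path string, content []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, content) {
			return false, nil // same unit: skip the disruptive restart
		}
		if err := os.WriteFile(path, content, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n...\n"))
		if err != nil {
			panic(err)
		}
		if changed {
			// Mirrors: systemctl -f daemon-reload && enable docker && restart docker
			fmt.Println("would run:", exec.Command("systemctl", "daemon-reload").String())
		}
	}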
	I0719 15:52:02.386972    1544 start.go:300] post-start starting for "addons-101000" (driver="qemu2")
	I0719 15:52:02.386977    1544 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:52:02.387049    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:52:02.387060    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.417331    1544 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:52:02.418797    1544 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 15:52:02.418804    1544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/addons for local assets ...
	I0719 15:52:02.418866    1544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/files for local assets ...
	I0719 15:52:02.418892    1544 start.go:303] post-start completed in 31.917459ms
	I0719 15:52:02.419241    1544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json ...
	I0719 15:52:02.419386    1544 start.go:128] duration metric: createHost completed in 16.142407666s
	I0719 15:52:02.419424    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:02.419636    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:02.419640    1544 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0719 15:52:02.473900    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689807122.468027460
	
	I0719 15:52:02.473908    1544 fix.go:206] guest clock: 1689807122.468027460
	I0719 15:52:02.473913    1544 fix.go:219] Guest: 2023-07-19 15:52:02.46802746 -0700 PDT Remote: 2023-07-19 15:52:02.419389 -0700 PDT m=+16.243794293 (delta=48.63846ms)
	I0719 15:52:02.473924    1544 fix.go:190] guest clock delta is within tolerance: 48.63846ms
	I0719 15:52:02.473927    1544 start.go:83] releasing machines lock for "addons-101000", held for 16.196985625s
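The guest-clock check parses the VM's `date +%s.%N` output, compares it with the host clock captured at the same instant, and accepts the skew if it is small enough. A sketch using the numbers from this run; the 2s tolerance is an assumption for illustration, since the log does not state the threshold:

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		out := "1689807122.468027460" // guest `date +%s.%N`
		secs, err := strconv.ParseFloat(out, 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(0, int64(secs*1e9))
		host := time.Unix(0, 1689807122419389000) // host clock at the same instant
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
	}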
	I0719 15:52:02.474254    1544 ssh_runner.go:195] Run: cat /version.json
	I0719 15:52:02.474266    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.474283    1544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:52:02.474307    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.504138    1544 ssh_runner.go:195] Run: systemctl --version
	I0719 15:52:02.506807    1544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:52:02.547286    1544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:52:02.547337    1544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:52:02.552537    1544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:52:02.552544    1544 start.go:466] detecting cgroup driver to use...
	I0719 15:52:02.552636    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:52:02.558292    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 15:52:02.561659    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 15:52:02.565184    1544 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 15:52:02.565214    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 15:52:02.568252    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 15:52:02.571117    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 15:52:02.574394    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 15:52:02.578183    1544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:52:02.581670    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 15:52:02.585324    1544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:52:02.588146    1544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:52:02.590822    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:02.668293    1544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 15:52:02.674070    1544 start.go:466] detecting cgroup driver to use...
	I0719 15:52:02.674127    1544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 15:52:02.680608    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:52:02.685038    1544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:52:02.692942    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:52:02.697559    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 15:52:02.702616    1544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 15:52:02.743950    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 15:52:02.749347    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:52:02.754962    1544 ssh_runner.go:195] Run: which cri-dockerd
	I0719 15:52:02.756311    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 15:52:02.758805    1544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 15:52:02.763719    1544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 15:52:02.840623    1544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 15:52:02.914223    1544 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 15:52:02.914238    1544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0719 15:52:02.919565    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:02.997357    1544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 15:52:04.154714    1544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157354834s)
	I0719 15:52:04.154783    1544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 15:52:04.233603    1544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 15:52:04.316105    1544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 15:52:04.398081    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:04.476954    1544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 15:52:04.483791    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:04.563625    1544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0719 15:52:04.586684    1544 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 15:52:04.586782    1544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 15:52:04.589057    1544 start.go:534] Will wait 60s for crictl version
	I0719 15:52:04.589109    1544 ssh_runner.go:195] Run: which crictl
	I0719 15:52:04.590411    1544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:52:04.605474    1544 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0719 15:52:04.605558    1544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 15:52:04.615366    1544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 15:52:04.635830    1544 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0719 15:52:04.635983    1544 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0719 15:52:04.637460    1544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
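That /etc/hosts rewrite is another idempotence idiom: strip any existing line ending in a tab plus the name, append a fresh "IP<TAB>name" mapping, and copy the result back, so repeated runs never duplicate the entry. A Go sketch of the same rewrite, pointed at a scratch file rather than the real /etc/hosts:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			// mirrors grep -v $'\t<name>$': drop lines ending in "\t<name>"
			if line == "" || strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/tmp/hosts.test", "192.168.105.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}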
	I0719 15:52:04.641595    1544 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:52:04.641637    1544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 15:52:04.650900    1544 docker.go:636] Got preloaded images: 
	I0719 15:52:04.650907    1544 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0719 15:52:04.650940    1544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 15:52:04.654168    1544 ssh_runner.go:195] Run: which lz4
	I0719 15:52:04.655512    1544 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 15:52:04.656801    1544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:52:04.656815    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0719 15:52:05.915036    1544 docker.go:600] Took 1.259592 seconds to copy over tarball
	I0719 15:52:05.915105    1544 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:52:06.974144    1544 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.059035792s)
	I0719 15:52:06.974158    1544 ssh_runner.go:146] rm: /preloaded.tar.lz4
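The preload path above avoids pulling images one by one: when /preloaded.tar.lz4 is missing on the guest (the failed stat), the cached tarball is copied over and unpacked straight into /var with lz4 decompression, then removed. An exec-based sketch of the check-then-extract step, assuming tar and lz4 are available on the target:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func extractPreload(tarball, dest string) error {
		if _, err := os.Stat(tarball); err != nil {
			return fmt.Errorf("preload missing (would scp it first): %w", err)
		}
		// mirrors: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
		cmd := exec.Command("tar", "-I", "lz4", "-C", dest, "-xf", tarball)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}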
	I0719 15:52:06.989746    1544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 15:52:06.993185    1544 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0719 15:52:06.998174    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:07.075272    1544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 15:52:09.295448    1544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.220180667s)
	I0719 15:52:09.295552    1544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 15:52:09.301832    1544 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 15:52:09.301841    1544 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:52:09.301925    1544 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 15:52:09.309268    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:52:09.309280    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:52:09.309309    1544 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0719 15:52:09.309319    1544 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-101000 NodeName:addons-101000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:52:09.309384    1544 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-101000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:52:09.309419    1544 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-101000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
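The kubeadm.yaml shown above is rendered from the options struct printed just before it. A pared-down sketch of that rendering with text/template, carrying only a handful of the fields; minikube's real template is much larger:

	package main

	import (
		"os"
		"text/template"
	)

	type kubeadmOpts struct {
		AdvertiseAddress  string
		APIServerPort     int
		NodeName          string
		CRISocket         string
		KubernetesVersion string
		PodSubnet         string
		ServiceCIDR       string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		opts := kubeadmOpts{
			AdvertiseAddress:  "192.168.105.2",
			APIServerPort:     8443,
			NodeName:          "addons-101000",
			CRISocket:         "/var/run/cri-dockerd.sock",
			KubernetesVersion: "v1.27.3",
			PodSubnet:         "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
		}
		template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
	}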
	I0719 15:52:09.309476    1544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0719 15:52:09.312676    1544 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:52:09.312711    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:52:09.315418    1544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0719 15:52:09.320330    1544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:52:09.325480    1544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0719 15:52:09.330778    1544 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0719 15:52:09.332164    1544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:52:09.335590    1544 certs.go:56] Setting up /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000 for IP: 192.168.105.2
	I0719 15:52:09.335612    1544 certs.go:190] acquiring lock for shared ca certs: {Name:mk57268b94adc82cb06ba056d8f0acecf538b87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.335779    1544 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key
	I0719 15:52:09.375531    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt ...
	I0719 15:52:09.375537    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt: {Name:mk18dc73651ebb7586f5cc870528fe59bb3eaca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.375716    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key ...
	I0719 15:52:09.375718    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key: {Name:mkf4847c0170d0ed2e02012567d5849b7cdc3e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.375829    1544 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key
	I0719 15:52:09.479964    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt ...
	I0719 15:52:09.479968    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt: {Name:mk931f43b9aeac1a637bc02f03d26df5c2c21559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.480104    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key ...
	I0719 15:52:09.480107    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key: {Name:mkbbced5a2200a63ea6918cadfce8d25c9e09696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.480228    1544 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key
	I0719 15:52:09.480236    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt with IP's: []
	I0719 15:52:09.550153    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt ...
	I0719 15:52:09.550157    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: {Name:mkfbd0ec0d392f0ad08f01bd61787ea0a90ba52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.550273    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key ...
	I0719 15:52:09.550276    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key: {Name:mk8c74eed8437e78eaa33e4b6b240669ae86a824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.550378    1544 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969
	I0719 15:52:09.550392    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0719 15:52:09.700054    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 ...
	I0719 15:52:09.700063    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969: {Name:mkf03671886dbbbb632ec2e172f912e064d8e1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.700299    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969 ...
	I0719 15:52:09.700303    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969: {Name:mk303c08d6b543a4cd38e9de14800a408d1d2869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.700416    1544 certs.go:337] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt
	I0719 15:52:09.700598    1544 certs.go:341] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key
	I0719 15:52:09.700687    1544 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key
	I0719 15:52:09.700696    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt with IP's: []
	I0719 15:52:09.741490    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt ...
	I0719 15:52:09.741493    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt: {Name:mk78bf5cf7588d5f6faf8ac273455bded2325b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.741610    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key ...
	I0719 15:52:09.741617    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key: {Name:mkd247533695e3682be8e4d6fb67fe0e52efd3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.741840    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 15:52:09.741864    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:52:09.741886    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:52:09.741911    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem (1675 bytes)
	I0719 15:52:09.742194    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0719 15:52:09.749753    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:52:09.756925    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:52:09.764081    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:52:09.770653    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:52:09.777938    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 15:52:09.785429    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:52:09.792823    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:52:09.799580    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:52:09.806238    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:52:09.812452    1544 ssh_runner.go:195] Run: openssl version
	I0719 15:52:09.814304    1544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:52:09.817773    1544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.819362    1544 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 19 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.819381    1544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.821228    1544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
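The b5213941.0 symlink is how OpenSSL locates trusted CAs: a link named <subject-hash>.0 pointing at the PEM, with the hash coming from `openssl x509 -hash -noout` exactly as run above. A sketch that shells out for the hash rather than reimplementing it:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCA(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		fmt.Println(linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
	}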
	I0719 15:52:09.824165    1544 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0719 15:52:09.825567    1544 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0719 15:52:09.825608    1544 kubeadm.go:404] StartCluster: {Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:52:09.825669    1544 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 15:52:09.831218    1544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:52:09.834469    1544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:52:09.837516    1544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:52:09.840473    1544 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:52:09.840487    1544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:52:09.861549    1544 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0719 15:52:09.861581    1544 kubeadm.go:322] [preflight] Running pre-flight checks
	I0719 15:52:09.914251    1544 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:52:09.914308    1544 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:52:09.914371    1544 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:52:09.979760    1544 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:52:09.988950    1544 out.go:204]   - Generating certificates and keys ...
	I0719 15:52:09.988981    1544 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0719 15:52:09.989014    1544 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0719 15:52:10.135496    1544 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 15:52:10.224881    1544 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0719 15:52:10.328051    1544 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0719 15:52:10.529629    1544 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0719 15:52:10.721019    1544 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0719 15:52:10.721090    1544 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-101000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0719 15:52:10.787563    1544 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0719 15:52:10.787619    1544 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-101000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0719 15:52:10.835004    1544 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 15:52:10.905361    1544 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 15:52:10.998864    1544 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0719 15:52:10.998890    1544 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:52:11.030652    1544 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:52:11.128642    1544 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:52:11.289310    1544 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:52:11.400745    1544 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:52:11.407437    1544 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:52:11.407496    1544 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:52:11.407517    1544 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0719 15:52:11.489012    1544 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:52:11.495210    1544 out.go:204]   - Booting up control plane ...
	I0719 15:52:11.495279    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:52:11.495324    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:52:11.495356    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:52:11.495394    1544 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:52:11.496331    1544 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:52:15.497794    1544 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001180 seconds
	I0719 15:52:15.497922    1544 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 15:52:15.503484    1544 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 15:52:16.021124    1544 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 15:52:16.021263    1544 kubeadm.go:322] [mark-control-plane] Marking the node addons-101000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 15:52:16.527825    1544 kubeadm.go:322] [bootstrap-token] Using token: za2ad5.mzmzgft4t0cdmv0r
	I0719 15:52:16.534544    1544 out.go:204]   - Configuring RBAC rules ...
	I0719 15:52:16.534604    1544 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 15:52:16.535903    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 15:52:16.539838    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 15:52:16.541602    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0719 15:52:16.542999    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 15:52:16.544402    1544 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 15:52:16.549043    1544 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 15:52:16.720317    1544 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0719 15:52:16.937953    1544 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0719 15:52:16.938484    1544 kubeadm.go:322] 
	I0719 15:52:16.938519    1544 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0719 15:52:16.938527    1544 kubeadm.go:322] 
	I0719 15:52:16.938563    1544 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0719 15:52:16.938588    1544 kubeadm.go:322] 
	I0719 15:52:16.938601    1544 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0719 15:52:16.938634    1544 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 15:52:16.938667    1544 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 15:52:16.938672    1544 kubeadm.go:322] 
	I0719 15:52:16.938695    1544 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0719 15:52:16.938698    1544 kubeadm.go:322] 
	I0719 15:52:16.938723    1544 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 15:52:16.938730    1544 kubeadm.go:322] 
	I0719 15:52:16.938754    1544 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0719 15:52:16.938790    1544 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 15:52:16.938841    1544 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 15:52:16.938844    1544 kubeadm.go:322] 
	I0719 15:52:16.938885    1544 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 15:52:16.938940    1544 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0719 15:52:16.938947    1544 kubeadm.go:322] 
	I0719 15:52:16.938990    1544 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token za2ad5.mzmzgft4t0cdmv0r \
	I0719 15:52:16.939068    1544 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 \
	I0719 15:52:16.939079    1544 kubeadm.go:322] 	--control-plane 
	I0719 15:52:16.939082    1544 kubeadm.go:322] 
	I0719 15:52:16.939124    1544 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0719 15:52:16.939129    1544 kubeadm.go:322] 
	I0719 15:52:16.939171    1544 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token za2ad5.mzmzgft4t0cdmv0r \
	I0719 15:52:16.939230    1544 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 
	I0719 15:52:16.939287    1544 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
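The --discovery-token-ca-cert-hash printed above is the SHA-256 digest of the cluster CA's public key, so it can be recomputed on the control-plane node if the printed value is lost. A minimal sketch, assuming the default kubeadm PKI path and an RSA CA key (this is the standard upstream kubeadm procedure, not output from this run):

    # Recompute the discovery-token-ca-cert-hash from the cluster CA public key
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'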
	I0719 15:52:16.939293    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:52:16.939300    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:52:16.943663    1544 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:52:16.946263    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:52:16.949272    1544 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
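The 457-byte conflist itself is not reproduced in the log. A representative bridge conflist of the kind minikube writes might look like the following; every field value here is illustrative, not the captured file contents:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }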
	I0719 15:52:16.953970    1544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:52:16.954045    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:16.954043    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4 minikube.k8s.io/name=addons-101000 minikube.k8s.io/updated_at=2023_07_19T15_52_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:17.018155    1544 ops.go:34] apiserver oom_adj: -16
	I0719 15:52:17.018193    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:17.554061    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:18.053987    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:18.552765    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:19.054043    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:19.552096    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:20.054238    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:20.554269    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:21.054238    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:21.554235    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:22.054268    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:22.554252    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:23.054226    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:23.554175    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:24.054171    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:24.553977    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:25.054181    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:25.553942    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:26.053975    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:26.553905    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:27.053866    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:27.553341    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.052551    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.553332    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.053858    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.553880    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.052491    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.101801    1544 kubeadm.go:1081] duration metric: took 13.147921333s to wait for elevateKubeSystemPrivileges.
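The run of identical `kubectl get sa default` calls above is a readiness poll: kubeadm creates the default ServiceAccount asynchronously, so minikube retries roughly every 500ms until it appears. A minimal shell equivalent of that loop (a sketch, not minikube's actual Go implementation):

    # Poll until the default ServiceAccount exists, mirroring the ~500ms cadence above
    until sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done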
	I0719 15:52:30.101816    1544 kubeadm.go:406] StartCluster complete in 20.276428833s
	I0719 15:52:30.101824    1544 settings.go:142] acquiring lock: {Name:mk58631521ffd49c3231a31589bcae3549c3b53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:30.101973    1544 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:52:30.102164    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/kubeconfig: {Name:mk508b0ad49e7803e8fd5dcb96b45d1248a097b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:30.102360    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 15:52:30.102400    1544 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0719 15:52:30.102441    1544 addons.go:69] Setting volumesnapshots=true in profile "addons-101000"
	I0719 15:52:30.102447    1544 addons.go:231] Setting addon volumesnapshots=true in "addons-101000"
	I0719 15:52:30.102465    1544 addons.go:69] Setting metrics-server=true in profile "addons-101000"
	I0719 15:52:30.102475    1544 addons.go:231] Setting addon metrics-server=true in "addons-101000"
	I0719 15:52:30.102477    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102506    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102510    1544 addons.go:69] Setting ingress=true in profile "addons-101000"
	I0719 15:52:30.102533    1544 addons.go:69] Setting ingress-dns=true in profile "addons-101000"
	I0719 15:52:30.102557    1544 addons.go:231] Setting addon ingress=true in "addons-101000"
	I0719 15:52:30.102537    1544 addons.go:69] Setting inspektor-gadget=true in profile "addons-101000"
	I0719 15:52:30.102572    1544 addons.go:231] Setting addon inspektor-gadget=true in "addons-101000"
	I0719 15:52:30.102587    1544 addons.go:69] Setting registry=true in profile "addons-101000"
	I0719 15:52:30.102609    1544 addons.go:231] Setting addon registry=true in "addons-101000"
	I0719 15:52:30.102627    1544 addons.go:231] Setting addon ingress-dns=true in "addons-101000"
	I0719 15:52:30.102650    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102671    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102679    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102685    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:30.102713    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102729    1544 addons.go:69] Setting storage-provisioner=true in profile "addons-101000"
	I0719 15:52:30.102734    1544 addons.go:231] Setting addon storage-provisioner=true in "addons-101000"
	I0719 15:52:30.102749    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102918    1544 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-101000"
	I0719 15:52:30.102930    1544 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-101000"
	I0719 15:52:30.102932    1544 addons.go:69] Setting cloud-spanner=true in profile "addons-101000"
	I0719 15:52:30.102942    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102946    1544 addons.go:231] Setting addon cloud-spanner=true in "addons-101000"
	I0719 15:52:30.102975    1544 host.go:66] Checking if "addons-101000" exists ...
	W0719 15:52:30.103028    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103039    1544 addons.go:277] "addons-101000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0719 15:52:30.103042    1544 addons.go:467] Verifying addon ingress=true in "addons-101000"
	I0719 15:52:30.106383    1544 out.go:177] * Verifying ingress addon...
	W0719 15:52:30.103028    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103167    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103182    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	I0719 15:52:30.103217    1544 addons.go:69] Setting default-storageclass=true in profile "addons-101000"
	W0719 15:52:30.103227    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103229    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	I0719 15:52:30.103236    1544 addons.go:69] Setting gcp-auth=true in profile "addons-101000"
	W0719 15:52:30.103647    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.115464    1544 addons.go:277] "addons-101000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115470    1544 addons.go:277] "addons-101000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115473    1544 addons.go:277] "addons-101000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115476    1544 addons.go:277] "addons-101000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115487    1544 mustload.go:65] Loading cluster: addons-101000
	W0719 15:52:30.115490    1544 addons.go:277] "addons-101000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115530    1544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-101000"
	W0719 15:52:30.115554    1544 addons.go:277] "addons-101000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115896    1544 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 15:52:30.118371    1544 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 15:52:30.125455    1544 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 15:52:30.125463    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 15:52:30.125471    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.122419    1544 addons.go:467] Verifying addon registry=true in "addons-101000"
	I0719 15:52:30.122435    1544 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0719 15:52:30.133254    1544 out.go:177] * Verifying registry addon...
	I0719 15:52:30.122534    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:30.122472    1544 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-101000"
	I0719 15:52:30.126814    1544 addons.go:231] Setting addon default-storageclass=true in "addons-101000"
	I0719 15:52:30.127301    1544 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 15:52:30.129401    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:52:30.136476    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:52:30.136487    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.139357    1544 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 15:52:30.136602    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.137003    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 15:52:30.137526    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.146971    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 15:52:30.147364    1544 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:30.147370    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:52:30.147377    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.154043    1544 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 15:52:30.155110    1544 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 15:52:30.167928    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
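The sed pipeline above edits the CoreDNS Corefile in place: it inserts a hosts block ahead of the forward directive (mapping host.minikube.internal to the host gateway) and a log directive ahead of errors. Reconstructed from the sed expressions, not copied from the cluster, the patched Corefile region looks roughly like:

    .:53 {
        log
        errors
        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        # ...remaining default plugins unchanged...
    }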
	I0719 15:52:30.178534    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 15:52:30.178543    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 15:52:30.183588    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 15:52:30.183597    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 15:52:30.191691    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 15:52:30.191699    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 15:52:30.203614    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:52:30.203623    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 15:52:30.208695    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:30.219630    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:52:30.219641    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:52:30.228005    1544 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 15:52:30.228015    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 15:52:30.232889    1544 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:30.232896    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 15:52:30.237389    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:30.293905    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:30.293918    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:52:30.323074    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:30.620570    1544 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-101000" context rescaled to 1 replicas
	I0719 15:52:30.620588    1544 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 15:52:30.627928    1544 out.go:177] * Verifying Kubernetes components...
	I0719 15:52:30.631983    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
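The "Will wait 6m0s for node" step corresponds to a node Ready-condition wait plus the kubelet service check on the line above. A manual equivalent, as a sketch (not the code path minikube itself uses):

    # Check the same two things by hand: node readiness and the kubelet unit
    kubectl wait --for=condition=Ready node/addons-101000 --timeout=6m
    sudo systemctl is-active --quiet kubelet && echo "kubelet is running"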
	I0719 15:52:30.771514    1544 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0719 15:52:30.880793    1544 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 15:52:30.880817    1544 retry.go:31] will retry after 267.649411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
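This failure is the usual CRD-before-CR race: the VolumeSnapshotClass object is applied in the same batch that creates its CustomResourceDefinition, and the API server has not established the CRD yet ("ensure CRDs are installed first"). minikube simply retries; a manual workaround is to serialize the applies, as a sketch:

    # Apply the CRD first, wait until the API server has established it,
    # then apply the custom resources that depend on it
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml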
	I0719 15:52:30.944103    1544 addons.go:467] Verifying addon metrics-server=true in "addons-101000"
	I0719 15:52:30.944554    1544 node_ready.go:35] waiting up to 6m0s for node "addons-101000" to be "Ready" ...
	I0719 15:52:30.946971    1544 node_ready.go:49] node "addons-101000" has status "Ready":"True"
	I0719 15:52:30.946984    1544 node_ready.go:38] duration metric: took 2.412542ms waiting for node "addons-101000" to be "Ready" ...
	I0719 15:52:30.946988    1544 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:30.951440    1544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:31.148622    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:32.960900    1544 pod_ready.go:102] pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:33.460293    1544 pod_ready.go:92] pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.460303    1544 pod_ready.go:81] duration metric: took 2.50887925s waiting for pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.460308    1544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.463445    1544 pod_ready.go:92] pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.463453    1544 pod_ready.go:81] duration metric: took 3.140459ms waiting for pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.463458    1544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.466021    1544 pod_ready.go:92] pod "etcd-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.466025    1544 pod_ready.go:81] duration metric: took 2.564083ms waiting for pod "etcd-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.466029    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.468619    1544 pod_ready.go:92] pod "kube-apiserver-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.468625    1544 pod_ready.go:81] duration metric: took 2.592875ms waiting for pod "kube-apiserver-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.468629    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.471097    1544 pod_ready.go:92] pod "kube-controller-manager-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.471103    1544 pod_ready.go:81] duration metric: took 2.47075ms waiting for pod "kube-controller-manager-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.471106    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jpdlk" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.691576    1544 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542959542s)
	I0719 15:52:33.857829    1544 pod_ready.go:92] pod "kube-proxy-jpdlk" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.857838    1544 pod_ready.go:81] duration metric: took 386.731917ms waiting for pod "kube-proxy-jpdlk" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.857843    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:34.259787    1544 pod_ready.go:92] pod "kube-scheduler-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:34.259798    1544 pod_ready.go:81] duration metric: took 401.956208ms waiting for pod "kube-scheduler-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:34.259802    1544 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:36.666794    1544 pod_ready.go:102] pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:36.752105    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 15:52:36.752122    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:36.786675    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 15:52:36.794559    1544 addons.go:231] Setting addon gcp-auth=true in "addons-101000"
	I0719 15:52:36.794583    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:36.795348    1544 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 15:52:36.795361    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:36.829668    1544 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0719 15:52:36.833634    1544 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0719 15:52:36.837615    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 15:52:36.837621    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 15:52:36.843015    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 15:52:36.843020    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 15:52:36.847964    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 15:52:36.847971    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0719 15:52:36.853694    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 15:52:37.162175    1544 pod_ready.go:92] pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:37.162186    1544 pod_ready.go:81] duration metric: took 2.902411833s waiting for pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:37.162191    1544 pod_ready.go:38] duration metric: took 6.215265042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:37.162200    1544 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:37.162261    1544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:37.213333    1544 api_server.go:72] duration metric: took 6.592798333s to wait for apiserver process to appear ...
	I0719 15:52:37.213345    1544 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:37.213352    1544 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0719 15:52:37.213987    1544 addons.go:467] Verifying addon gcp-auth=true in "addons-101000"
	I0719 15:52:37.217265    1544 out.go:177] * Verifying gcp-auth addon...
	I0719 15:52:37.224564    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 15:52:37.226285    1544 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0719 15:52:37.227830    1544 api_server.go:141] control plane version: v1.27.3
	I0719 15:52:37.227837    1544 api_server.go:131] duration metric: took 14.489167ms to wait for apiserver health ...
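The healthz probe can be reproduced by hand against the same endpoint; a sketch (TLS verification skipped for brevity, since the apiserver certificate is signed by the minikube CA):

    # Manual apiserver health probe; prints "ok" on success
    curl -k https://192.168.105.2:8443/healthz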
	I0719 15:52:37.227841    1544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:37.231500    1544 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 15:52:37.231508    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:37.232481    1544 system_pods.go:59] 10 kube-system pods found
	I0719 15:52:37.232488    1544 system_pods.go:61] "coredns-5d78c9869d-4dqmf" [3335e5bb-cd1b-4f88-8662-5f52c470886e] Running
	I0719 15:52:37.232491    1544 system_pods.go:61] "coredns-5d78c9869d-knvd5" [8982b7a7-b4c0-4d8f-aaac-24b2aa68727d] Running
	I0719 15:52:37.232493    1544 system_pods.go:61] "etcd-addons-101000" [76e23cae-d673-4ce3-8c1a-28173c4ad7fb] Running
	I0719 15:52:37.232495    1544 system_pods.go:61] "kube-apiserver-addons-101000" [b5b978e3-f407-4753-974c-733ac90e64b3] Running
	I0719 15:52:37.232497    1544 system_pods.go:61] "kube-controller-manager-addons-101000" [d4160a3e-e6e8-450b-8484-7fe93b6a935c] Running
	I0719 15:52:37.232500    1544 system_pods.go:61] "kube-proxy-jpdlk" [e6fd914c-3322-4343-bfc3-e20cc61a9c1b] Running
	I0719 15:52:37.232503    1544 system_pods.go:61] "kube-scheduler-addons-101000" [22d2fcb7-43d8-4531-958e-506bd7b1d4b6] Running
	I0719 15:52:37.232506    1544 system_pods.go:61] "metrics-server-844d8db974-vt8ml" [a25120a0-a3f2-4a32-851b-21a7b451818f] Running
	I0719 15:52:37.232510    1544 system_pods.go:61] "snapshot-controller-75bbb956b9-9qbz2" [449931f4-f646-44cb-b8ae-3246b6e14db1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.232514    1544 system_pods.go:61] "snapshot-controller-75bbb956b9-gsppf" [358b5d0e-1f68-47f3-bc7d-ebbf387b2e40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.232518    1544 system_pods.go:74] duration metric: took 4.674833ms to wait for pod list to return data ...
	I0719 15:52:37.232523    1544 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:37.235194    1544 default_sa.go:45] found service account: "default"
	I0719 15:52:37.235203    1544 default_sa.go:55] duration metric: took 2.676875ms for default service account to be created ...
	I0719 15:52:37.235207    1544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:37.262055    1544 system_pods.go:86] 10 kube-system pods found
	I0719 15:52:37.262063    1544 system_pods.go:89] "coredns-5d78c9869d-4dqmf" [3335e5bb-cd1b-4f88-8662-5f52c470886e] Running
	I0719 15:52:37.262066    1544 system_pods.go:89] "coredns-5d78c9869d-knvd5" [8982b7a7-b4c0-4d8f-aaac-24b2aa68727d] Running
	I0719 15:52:37.262069    1544 system_pods.go:89] "etcd-addons-101000" [76e23cae-d673-4ce3-8c1a-28173c4ad7fb] Running
	I0719 15:52:37.262071    1544 system_pods.go:89] "kube-apiserver-addons-101000" [b5b978e3-f407-4753-974c-733ac90e64b3] Running
	I0719 15:52:37.262073    1544 system_pods.go:89] "kube-controller-manager-addons-101000" [d4160a3e-e6e8-450b-8484-7fe93b6a935c] Running
	I0719 15:52:37.262075    1544 system_pods.go:89] "kube-proxy-jpdlk" [e6fd914c-3322-4343-bfc3-e20cc61a9c1b] Running
	I0719 15:52:37.262078    1544 system_pods.go:89] "kube-scheduler-addons-101000" [22d2fcb7-43d8-4531-958e-506bd7b1d4b6] Running
	I0719 15:52:37.262080    1544 system_pods.go:89] "metrics-server-844d8db974-vt8ml" [a25120a0-a3f2-4a32-851b-21a7b451818f] Running
	I0719 15:52:37.262085    1544 system_pods.go:89] "snapshot-controller-75bbb956b9-9qbz2" [449931f4-f646-44cb-b8ae-3246b6e14db1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.262089    1544 system_pods.go:89] "snapshot-controller-75bbb956b9-gsppf" [358b5d0e-1f68-47f3-bc7d-ebbf387b2e40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.262093    1544 system_pods.go:126] duration metric: took 26.883625ms to wait for k8s-apps to be running ...
	I0719 15:52:37.262096    1544 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:37.262153    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:37.267299    1544 system_svc.go:56] duration metric: took 5.200291ms WaitForService to wait for kubelet.
	I0719 15:52:37.267305    1544 kubeadm.go:581] duration metric: took 6.646776125s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0719 15:52:37.267313    1544 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:37.460427    1544 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0719 15:52:37.460461    1544 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:37.460466    1544 node_conditions.go:105] duration metric: took 193.152542ms to run NodePressure ...
	I0719 15:52:37.460471    1544 start.go:228] waiting for startup goroutines ...
	I0719 15:52:37.735761    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:38.234684    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:38.735371    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:39.235765    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:39.735907    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:40.235373    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:40.734681    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:41.235287    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:41.735207    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:42.235545    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:42.734812    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:43.235286    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:43.736840    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:44.235205    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:44.738509    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:45.235248    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:45.735521    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:46.236102    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:46.735748    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:47.235790    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:47.736196    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:48.235182    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:48.735185    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:49.237494    1544 kapi.go:107] duration metric: took 12.013055167s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 15:52:49.242337    1544 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-101000 cluster.
	I0719 15:52:49.247318    1544 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 15:52:49.251640    1544 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
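A pod opting out of the credential mount would carry that label; a minimal sketch, assuming a "true" value (the log only names the key, and the pod name and image here are hypothetical):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds
      labels:
        gcp-auth-skip-secret: "true"   # opts this pod out of the gcp-auth webhook mount
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]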
	I0719 15:58:30.120713    1544 kapi.go:107] duration metric: took 6m0.008647875s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0719 15:58:30.121029    1544 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0719 15:58:30.144635    1544 kapi.go:107] duration metric: took 6m0.001552791s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 15:58:30.144676    1544 kapi.go:107] duration metric: took 6m0.011563667s to wait for kubernetes.io/minikube-addons=registry ...
	W0719 15:58:30.144755    1544 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0719 15:58:30.144795    1544 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0719 15:58:30.152624    1544 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, inspektor-gadget, default-storageclass, metrics-server, volumesnapshots, gcp-auth
	I0719 15:58:30.164713    1544 addons.go:502] enable addons completed in 6m0.066202625s: enabled=[ingress-dns storage-provisioner cloud-spanner inspektor-gadget default-storageclass metrics-server volumesnapshots gcp-auth]
	I0719 15:58:30.164763    1544 start.go:233] waiting for cluster config update ...
	I0719 15:58:30.164788    1544 start.go:242] writing updated cluster config ...
	I0719 15:58:30.169625    1544 ssh_runner.go:195] Run: rm -f paused
	I0719 15:58:30.237703    1544 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0719 15:58:30.241711    1544 out.go:177] * Done! kubectl is now configured to use "addons-101000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-07-19 22:51:58 UTC, ends at Wed 2023-07-19 23:12:09 UTC. --
	Jul 19 22:52:44 addons-101000 dockerd[1155]: time="2023-07-19T22:52:44.191384231Z" level=warning msg="cleaning up after shim disconnected" id=feffc055991a5f9040fb2443f3d7d925f26478390a78d13e35fcf50a7b2bd9b3 namespace=moby
	Jul 19 22:52:44 addons-101000 dockerd[1155]: time="2023-07-19T22:52:44.191405671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1149]: time="2023-07-19T22:52:45.210876361Z" level=info msg="ignoring event" container=ff082e616338f9ee177498a150f356af0b22b9e031e5c62344154819899b6b1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.211082068Z" level=info msg="shim disconnected" id=ff082e616338f9ee177498a150f356af0b22b9e031e5c62344154819899b6b1b namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.211367980Z" level=warning msg="cleaning up after shim disconnected" id=ff082e616338f9ee177498a150f356af0b22b9e031e5c62344154819899b6b1b namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.211378115Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336810596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336856434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336888007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336900019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:45 addons-101000 cri-dockerd[1051]: time="2023-07-19T22:52:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0ee1896a26320c3f2a0276be91800f69e284fd28f8830c62afec6149f3e01934/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 22:52:45 addons-101000 dockerd[1149]: time="2023-07-19T22:52:45.710423798Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jul 19 22:52:48 addons-101000 cri-dockerd[1051]: time="2023-07-19T22:52:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502093407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502123765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502132147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502138277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526761438Z" level=info msg="shim disconnected" id=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526811146Z" level=warning msg="cleaning up after shim disconnected" id=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526818938Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1149]: time="2023-07-19T23:10:37.527016893Z" level=info msg="ignoring event" container=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596743616Z" level=info msg="shim disconnected" id=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596774491Z" level=warning msg="cleaning up after shim disconnected" id=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596779074Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1149]: time="2023-07-19T23:10:37.596907072Z" level=info msg="ignoring event" container=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	1cc3491bfa533       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              19 minutes ago      Running             gcp-auth                     0                   0ee1896a26320
	2b9a59faa57f0       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   19 minutes ago      Running             volume-snapshot-controller   0                   4d989e719f82a
	d47989aa362eb       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   19 minutes ago      Running             volume-snapshot-controller   0                   16bf9bed7271f
	0df05b2c74afc       97e04611ad434                                                                                                             19 minutes ago      Running             coredns                      0                   49ab25acb4281
	19588c52e552d       fb73e92641fd5                                                                                                             19 minutes ago      Running             kube-proxy                   0                   4d2bba9dbbd12
	f959c7f626d6e       24bc64e911039                                                                                                             19 minutes ago      Running             etcd                         0                   e9214702e68a5
	c6c632dd083f2       bcb9e554eaab6                                                                                                             19 minutes ago      Running             kube-scheduler               0                   956f93b928e2f
	862babcc9993e       ab3683b584ae5                                                                                                             19 minutes ago      Running             kube-controller-manager      0                   81af0dc9e0f17
	5984dda0d68af       39dfb036b0986                                                                                                             19 minutes ago      Running             kube-apiserver               0                   b8a23dc6dd212
	
	* 
	* ==> coredns [0df05b2c74af] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38035 - 43333 "HINFO IN 3178013197050500524.6871022848785512211. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004455439s
	[INFO] 10.244.0.9:44781 - 2820 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000096121s
	[INFO] 10.244.0.9:49458 - 31580 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000042327s
	[INFO] 10.244.0.9:47092 - 42379 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000038365s
	[INFO] 10.244.0.9:51547 - 25508 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000044495s
	[INFO] 10.244.0.9:33952 - 7053 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043995s
	[INFO] 10.244.0.9:46272 - 9283 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000029482s
	[INFO] 10.244.0.9:44297 - 58626 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001146116s
	[INFO] 10.244.0.9:36915 - 33133 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001094699s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-101000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-101000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4
	                    minikube.k8s.io/name=addons-101000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_19T15_52_16_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Jul 2023 22:52:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-101000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Jul 2023 23:12:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-101000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 06f7171a2e3b478b8a006d3ed11bcad4
	  System UUID:                06f7171a2e3b478b8a006d3ed11bcad4
	  Boot ID:                    388cb244-002c-43f0-bc4d-d5cefb6c596c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-hfg7x                0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5d78c9869d-knvd5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 etcd-addons-101000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19m
	  kube-system                 kube-apiserver-addons-101000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-101000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-jpdlk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-101000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-75bbb956b9-9qbz2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-75bbb956b9-gsppf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node addons-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node addons-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node addons-101000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node addons-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node addons-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node addons-101000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                kubelet          Node addons-101000 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node addons-101000 event: Registered Node addons-101000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.640919] EINJ: EINJ table not found.
	[  +0.493985] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044020] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000805] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jul19 22:52] systemd-fstab-generator[483]: Ignoring "noauto" for root device
	[  +0.066120] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.415432] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.174029] systemd-fstab-generator[789]: Ignoring "noauto" for root device
	[  +0.074321] systemd-fstab-generator[800]: Ignoring "noauto" for root device
	[  +0.082963] systemd-fstab-generator[813]: Ignoring "noauto" for root device
	[  +1.234248] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +0.083333] systemd-fstab-generator[982]: Ignoring "noauto" for root device
	[  +0.083834] systemd-fstab-generator[993]: Ignoring "noauto" for root device
	[  +0.077345] systemd-fstab-generator[1004]: Ignoring "noauto" for root device
	[  +0.087881] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +2.511224] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
	[  +2.197027] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.212424] systemd-fstab-generator[1454]: Ignoring "noauto" for root device
	[  +5.140733] systemd-fstab-generator[2321]: Ignoring "noauto" for root device
	[ +14.969880] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.331585] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.144611] kauditd_printk_skb: 47 callbacks suppressed
	[  +7.075011] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.120561] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [f959c7f626d6] <==
	* {"level":"info","ts":"2023-07-19T22:52:12.834Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-101000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-19T23:02:13.581Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":796}
	{"level":"info","ts":"2023-07-19T23:02:13.583Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":796,"took":"1.901103ms","hash":1438469743}
	{"level":"info","ts":"2023-07-19T23:02:13.583Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1438469743,"revision":796,"compact-revision":-1}
	{"level":"info","ts":"2023-07-19T23:07:13.591Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":947}
	{"level":"info","ts":"2023-07-19T23:07:13.593Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":947,"took":"1.234408ms","hash":3746673681}
	{"level":"info","ts":"2023-07-19T23:07:13.593Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3746673681,"revision":947,"compact-revision":796}
	
	* 
	* ==> gcp-auth [1cc3491bfa53] <==
	* 2023/07/19 22:52:48 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  23:12:09 up 20 min,  0 users,  load average: 0.16, 0.27, 0.26
	Linux addons-101000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5984dda0d68a] <==
	* I0719 23:02:14.242891       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:02:14.242923       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:02:14.252138       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:03:14.170675       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:04:14.169449       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:05:14.171272       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:06:14.171498       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:07:14.168568       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:07:14.244513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:07:14.245028       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:07:14.260797       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:08:14.169738       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:09:14.170808       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:10:14.170454       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0719 23:10:38.206024       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0719 23:10:38.206055       1 handler_proxy.go:100] no RequestInfo found in the context
	E0719 23:10:38.206097       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 23:10:38.206106       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 23:10:38.206140       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0719 23:11:38.206606       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0719 23:11:38.206668       1 handler_proxy.go:100] no RequestInfo found in the context
	E0719 23:11:38.206959       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 23:11:38.207005       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [862babcc9993] <==
	* I0719 22:52:43.092574       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:43.110448       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:44.118612       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:44.212265       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.139777       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:45.152565       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.216322       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.218685       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.220523       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.220604       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0719 22:52:45.235773       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.163044       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.176901       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.183652       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.183863       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0719 22:52:46.193621       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:59.164951       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0719 22:52:59.165085       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0719 22:52:59.266614       1 shared_informer.go:318] Caches are synced for resource quota
	I0719 22:52:59.590599       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0719 22:52:59.691557       1 shared_informer.go:318] Caches are synced for garbage collector
	I0719 22:53:15.026654       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:53:15.048574       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:53:16.014227       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:53:16.037423       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [19588c52e552] <==
	* I0719 22:52:31.940544       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0719 22:52:31.940603       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0719 22:52:31.940629       1 server_others.go:554] "Using iptables proxy"
	I0719 22:52:31.978237       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0719 22:52:31.978247       1 server_others.go:192] "Using iptables Proxier"
	I0719 22:52:31.978272       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 22:52:31.978547       1 server.go:658] "Version info" version="v1.27.3"
	I0719 22:52:31.978554       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 22:52:31.980358       1 config.go:188] "Starting service config controller"
	I0719 22:52:31.980401       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0719 22:52:31.980465       1 config.go:97] "Starting endpoint slice config controller"
	I0719 22:52:31.980482       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0719 22:52:31.980953       1 config.go:315] "Starting node config controller"
	I0719 22:52:31.980984       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0719 22:52:32.081146       1 shared_informer.go:318] Caches are synced for node config
	I0719 22:52:32.081155       1 shared_informer.go:318] Caches are synced for service config
	I0719 22:52:32.081163       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c6c632dd083f] <==
	* W0719 22:52:14.250631       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 22:52:14.250639       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 22:52:14.250680       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 22:52:14.250689       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 22:52:14.250732       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 22:52:14.250739       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 22:52:14.250771       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 22:52:14.250778       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 22:52:14.250830       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 22:52:14.250837       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 22:52:14.250850       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 22:52:14.250875       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 22:52:14.250933       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:14.250977       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:14.251012       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 22:52:14.251046       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 22:52:14.251076       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:14.251088       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:15.080697       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 22:52:15.080736       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 22:52:15.086216       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:15.086240       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:15.262476       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 22:52:15.262526       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0719 22:52:15.546346       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-07-19 22:51:58 UTC, ends at Wed 2023-07-19 23:12:10 UTC. --
	Jul 19 23:08:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:09:16 addons-101000 kubelet[2341]: E0719 23:09:16.799632    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:09:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:09:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:09:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:10:16 addons-101000 kubelet[2341]: E0719 23:10:16.804319    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:10:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:10:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:10:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.730407    2341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a25120a0-a3f2-4a32-851b-21a7b451818f-tmp-dir\") pod \"a25120a0-a3f2-4a32-851b-21a7b451818f\" (UID: \"a25120a0-a3f2-4a32-851b-21a7b451818f\") "
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.730428    2341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcm69\" (UniqueName: \"kubernetes.io/projected/a25120a0-a3f2-4a32-851b-21a7b451818f-kube-api-access-fcm69\") pod \"a25120a0-a3f2-4a32-851b-21a7b451818f\" (UID: \"a25120a0-a3f2-4a32-851b-21a7b451818f\") "
	Jul 19 23:10:37 addons-101000 kubelet[2341]: W0719 23:10:37.730470    2341 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a25120a0-a3f2-4a32-851b-21a7b451818f/volumes/kubernetes.io~empty-dir/tmp-dir: clearQuota called, but quotas disabled
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.730516    2341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25120a0-a3f2-4a32-851b-21a7b451818f-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "a25120a0-a3f2-4a32-851b-21a7b451818f" (UID: "a25120a0-a3f2-4a32-851b-21a7b451818f"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.732919    2341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25120a0-a3f2-4a32-851b-21a7b451818f-kube-api-access-fcm69" (OuterVolumeSpecName: "kube-api-access-fcm69") pod "a25120a0-a3f2-4a32-851b-21a7b451818f" (UID: "a25120a0-a3f2-4a32-851b-21a7b451818f"). InnerVolumeSpecName "kube-api-access-fcm69". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.832250    2341 reconciler_common.go:300] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a25120a0-a3f2-4a32-851b-21a7b451818f-tmp-dir\") on node \"addons-101000\" DevicePath \"\""
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.832267    2341 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fcm69\" (UniqueName: \"kubernetes.io/projected/a25120a0-a3f2-4a32-851b-21a7b451818f-kube-api-access-fcm69\") on node \"addons-101000\" DevicePath \"\""
	Jul 19 23:10:38 addons-101000 kubelet[2341]: I0719 23:10:38.541669    2341 scope.go:115] "RemoveContainer" containerID="1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279"
	Jul 19 23:10:38 addons-101000 kubelet[2341]: I0719 23:10:38.572299    2341 scope.go:115] "RemoveContainer" containerID="1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279"
	Jul 19 23:10:38 addons-101000 kubelet[2341]: E0719 23:10:38.573121    2341 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279" containerID="1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279"
	Jul 19 23:10:38 addons-101000 kubelet[2341]: I0719 23:10:38.573165    2341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279} err="failed to get container status \"1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279"
	Jul 19 23:10:38 addons-101000 kubelet[2341]: I0719 23:10:38.811194    2341 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=a25120a0-a3f2-4a32-851b-21a7b451818f path="/var/lib/kubelet/pods/a25120a0-a3f2-4a32-851b-21a7b451818f/volumes"
	Jul 19 23:11:16 addons-101000 kubelet[2341]: E0719 23:11:16.803860    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:11:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:11:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:11:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-101000 -n addons-101000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-101000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (0.73s)

TestAddons/parallel/InspektorGadget (480.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:329: TestAddons/parallel/InspektorGadget: WARNING: pod list for "gadget" "k8s-app=gadget" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:814: ***** TestAddons/parallel/InspektorGadget: pod "k8s-app=gadget" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:814: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-101000 -n addons-101000
addons_test.go:814: TestAddons/parallel/InspektorGadget: showing logs for failed pods as of 2023-07-19 16:18:36.5553 -0700 PDT m=+1644.554267960
addons_test.go:815: failed waiting for inspektor-gadget pod: k8s-app=gadget within 8m0s: context deadline exceeded
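
The failure above is a label-selector wait timing out: the harness polls the "gadget" namespace for pods labeled k8s-app=gadget and gives up when the 8m0s deadline passes, which surfaces as "context deadline exceeded". Below is a minimal client-go sketch of that kind of check, for illustration only; it is not the harness's actual helper, and the kubeconfig location (clientcmd.RecommendedHomeFile) is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig holds the addons-101000 context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 5s until a pod matching k8s-app=gadget reports Running, or
	// the 8m0s deadline expires (mirroring the timeout reported in the log).
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 8*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("gadget").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=gadget",
			})
			if err != nil {
				return false, nil // treat API hiccups as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("gadget pod never became Running:", err) // e.g. context deadline exceeded
	}
}

Run against this cluster it would hit the same timeout, since the pod list warning above shows no gadget pod ever appeared.
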
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-101000 -n addons-101000
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-101000 logs -n 25
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | -p download-only-744000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | -p download-only-744000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| delete  | -p download-only-744000        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| delete  | -p download-only-744000        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| start   | --download-only -p             | binary-mirror-101000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | binary-mirror-101000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-101000        | binary-mirror-101000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| start   | -p addons-101000               | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:58 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 16:10 PDT |                     |
	|         | addons-101000                  |                      |         |         |                     |                     |
	| addons  | addons-101000 addons           | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 16:10 PDT | 19 Jul 23 16:10 PDT |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 16:12 PDT | 19 Jul 23 16:12 PDT |
	|         | -p addons-101000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/19 15:51:46
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:51:46.194297    1544 out.go:296] Setting OutFile to fd 1 ...
	I0719 15:51:46.194422    1544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:46.194425    1544 out.go:309] Setting ErrFile to fd 2...
	I0719 15:51:46.194428    1544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:46.194533    1544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 15:51:46.195601    1544 out.go:303] Setting JSON to false
	I0719 15:51:46.210650    1544 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1277,"bootTime":1689805829,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 15:51:46.210718    1544 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 15:51:46.215551    1544 out.go:177] * [addons-101000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 15:51:46.222492    1544 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 15:51:46.226347    1544 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:51:46.222559    1544 notify.go:220] Checking for updates...
	I0719 15:51:46.229495    1544 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 15:51:46.232495    1544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:51:46.235514    1544 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 15:51:46.238453    1544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:51:46.241624    1544 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 15:51:46.245483    1544 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 15:51:46.252546    1544 start.go:298] selected driver: qemu2
	I0719 15:51:46.252550    1544 start.go:880] validating driver "qemu2" against <nil>
	I0719 15:51:46.252555    1544 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:51:46.254378    1544 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 15:51:46.257444    1544 out.go:177] * Automatically selected the socket_vmnet network
	I0719 15:51:46.260545    1544 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:51:46.260571    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:51:46.260577    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:51:46.260582    1544 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 15:51:46.260588    1544 start_flags.go:319] config:
	{Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:51:46.264618    1544 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:51:46.272375    1544 out.go:177] * Starting control plane node addons-101000 in cluster addons-101000
	I0719 15:51:46.276444    1544 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:51:46.276475    1544 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 15:51:46.276490    1544 cache.go:57] Caching tarball of preloaded images
	I0719 15:51:46.276551    1544 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 15:51:46.276557    1544 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 15:51:46.276783    1544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json ...
	I0719 15:51:46.276796    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json: {Name:mk5e2042adc5d3df20329816c5917e6964724b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:51:46.277018    1544 start.go:365] acquiring machines lock for addons-101000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:51:46.277112    1544 start.go:369] acquired machines lock for "addons-101000" in 87.917µs
	I0719 15:51:46.277123    1544 start.go:93] Provisioning new machine with config: &{Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 15:51:46.277150    1544 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 15:51:46.284454    1544 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 15:51:46.605350    1544 start.go:159] libmachine.API.Create for "addons-101000" (driver="qemu2")
	I0719 15:51:46.605388    1544 client.go:168] LocalClient.Create starting
	I0719 15:51:46.605532    1544 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 15:51:46.811960    1544 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 15:51:46.895754    1544 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 15:51:47.292066    1544 main.go:141] libmachine: Creating SSH key...
	I0719 15:51:47.506919    1544 main.go:141] libmachine: Creating Disk image...
	I0719 15:51:47.506931    1544 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 15:51:47.507226    1544 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.541355    1544 main.go:141] libmachine: STDOUT: 
	I0719 15:51:47.541387    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.541456    1544 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2 +20000M
	I0719 15:51:47.548773    1544 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 15:51:47.548786    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.548801    1544 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.548806    1544 main.go:141] libmachine: Starting QEMU VM...
	I0719 15:51:47.548843    1544 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:3a:02:96:05:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.614276    1544 main.go:141] libmachine: STDOUT: 
	I0719 15:51:47.614314    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.614319    1544 main.go:141] libmachine: Attempt 0
	I0719 15:51:47.614337    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:49.616493    1544 main.go:141] libmachine: Attempt 1
	I0719 15:51:49.616586    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:51.618782    1544 main.go:141] libmachine: Attempt 2
	I0719 15:51:51.618840    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:53.620905    1544 main.go:141] libmachine: Attempt 3
	I0719 15:51:53.620918    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:55.622923    1544 main.go:141] libmachine: Attempt 4
	I0719 15:51:55.622935    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:57.624701    1544 main.go:141] libmachine: Attempt 5
	I0719 15:51:57.624722    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:59.626797    1544 main.go:141] libmachine: Attempt 6
	I0719 15:51:59.626823    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:59.626967    1544 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0719 15:51:59.626997    1544 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b9ba8e}
	I0719 15:51:59.627004    1544 main.go:141] libmachine: Found match: 36:3a:2:96:5:da
	I0719 15:51:59.627013    1544 main.go:141] libmachine: IP: 192.168.105.2
	I0719 15:51:59.627019    1544 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0719 15:52:01.648988    1544 machine.go:88] provisioning docker machine ...
	I0719 15:52:01.649049    1544 buildroot.go:166] provisioning hostname "addons-101000"
	I0719 15:52:01.650570    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.651376    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.651395    1544 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-101000 && echo "addons-101000" | sudo tee /etc/hostname
	I0719 15:52:01.738896    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-101000
	
	I0719 15:52:01.739019    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.739522    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.739541    1544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-101000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-101000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-101000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:52:01.809650    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:52:01.809665    1544 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15585-1056/.minikube CaCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15585-1056/.minikube}
	I0719 15:52:01.809675    1544 buildroot.go:174] setting up certificates
	I0719 15:52:01.809702    1544 provision.go:83] configureAuth start
	I0719 15:52:01.809710    1544 provision.go:138] copyHostCerts
	I0719 15:52:01.809873    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem (1675 bytes)
	I0719 15:52:01.810191    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem (1082 bytes)
	I0719 15:52:01.810327    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem (1123 bytes)
	I0719 15:52:01.810465    1544 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem org=jenkins.addons-101000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-101000]
	I0719 15:52:01.879682    1544 provision.go:172] copyRemoteCerts
	I0719 15:52:01.879750    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:52:01.879766    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:01.912803    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:52:01.919846    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 15:52:01.926696    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:52:01.934065    1544 provision.go:86] duration metric: configureAuth took 124.357417ms
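configureAuth signs a Docker server certificate with the SAN list logged above (the VM IP, localhost, and the machine names). If TLS to the daemon later fails, the SANs actually baked into the cert can be inspected; a minimal sketch using the path from this run:

	# Print the subject and Subject Alternative Names of the generated server cert.
	openssl x509 -in /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'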
	I0719 15:52:01.934074    1544 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:52:01.934167    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:01.934205    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.934418    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.934423    1544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 15:52:01.991251    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 15:52:01.991259    1544 buildroot.go:70] root file system type: tmpfs
	I0719 15:52:01.991320    1544 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 15:52:01.991364    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.991596    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.991636    1544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 15:52:02.049859    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 15:52:02.049895    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:02.050139    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:02.050148    1544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 15:52:02.386931    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 15:52:02.386943    1544 machine.go:91] provisioned docker machine in 737.934792ms
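The diff-or-move idiom above only installs docker.service.new when it differs from the unit already on disk; on this first boot the diff fails (no existing file), so the move, daemon-reload, enable, and restart all run, producing the symlink message. Once installed, the loaded unit and its single effective ExecStart can be confirmed inside the guest; a minimal sketch:

	# Show the unit file systemd actually loaded, and the one ExecStart it kept.
	sudo systemctl cat docker
	sudo systemctl show docker -p ExecStart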
	I0719 15:52:02.386948    1544 client.go:171] LocalClient.Create took 15.78172825s
	I0719 15:52:02.386964    1544 start.go:167] duration metric: libmachine.API.Create for "addons-101000" took 15.781794083s
	I0719 15:52:02.386972    1544 start.go:300] post-start starting for "addons-101000" (driver="qemu2")
	I0719 15:52:02.386977    1544 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:52:02.387049    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:52:02.387060    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.417331    1544 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:52:02.418797    1544 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 15:52:02.418804    1544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/addons for local assets ...
	I0719 15:52:02.418866    1544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/files for local assets ...
	I0719 15:52:02.418892    1544 start.go:303] post-start completed in 31.917459ms
	I0719 15:52:02.419241    1544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json ...
	I0719 15:52:02.419386    1544 start.go:128] duration metric: createHost completed in 16.142407666s
	I0719 15:52:02.419424    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:02.419636    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:02.419640    1544 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:52:02.473900    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689807122.468027460
	
	I0719 15:52:02.473908    1544 fix.go:206] guest clock: 1689807122.468027460
	I0719 15:52:02.473913    1544 fix.go:219] Guest: 2023-07-19 15:52:02.46802746 -0700 PDT Remote: 2023-07-19 15:52:02.419389 -0700 PDT m=+16.243794293 (delta=48.63846ms)
	I0719 15:52:02.473924    1544 fix.go:190] guest clock delta is within tolerance: 48.63846ms
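The %!s(MISSING) artifacts in this log come from minikube formatting its command template without arguments; the command actually run here is `date +%s.%N`, as the nanosecond-resolution output above shows. The clock check compares that guest timestamp against host time and accepts the 48ms delta. A minimal sketch of the same check, assuming the SSH key path from this run (macOS `date` lacks %N, so the host side is seconds-only):

	# Nanosecond-resolution epoch time inside the Buildroot guest (GNU date).
	ssh -i /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa \
	  docker@192.168.105.2 'date +%s.%N'
	# Host side for comparison:
	date +%s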
	I0719 15:52:02.473927    1544 start.go:83] releasing machines lock for "addons-101000", held for 16.196985625s
	I0719 15:52:02.474254    1544 ssh_runner.go:195] Run: cat /version.json
	I0719 15:52:02.474266    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.474283    1544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:52:02.474307    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.504138    1544 ssh_runner.go:195] Run: systemctl --version
	I0719 15:52:02.506807    1544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:52:02.547286    1544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:52:02.547337    1544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:52:02.552537    1544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:52:02.552544    1544 start.go:466] detecting cgroup driver to use...
	I0719 15:52:02.552636    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:52:02.558292    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 15:52:02.561659    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 15:52:02.565184    1544 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 15:52:02.565214    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 15:52:02.568252    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 15:52:02.571117    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 15:52:02.574394    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 15:52:02.578183    1544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:52:02.581670    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 15:52:02.585324    1544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:52:02.588146    1544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:52:02.590822    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:02.668293    1544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 15:52:02.674070    1544 start.go:466] detecting cgroup driver to use...
	I0719 15:52:02.674127    1544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 15:52:02.680608    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:52:02.685038    1544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:52:02.692942    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:52:02.697559    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 15:52:02.702616    1544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 15:52:02.743950    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 15:52:02.749347    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:52:02.754962    1544 ssh_runner.go:195] Run: which cri-dockerd
	I0719 15:52:02.756311    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 15:52:02.758805    1544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 15:52:02.763719    1544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 15:52:02.840623    1544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 15:52:02.914223    1544 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 15:52:02.914238    1544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0719 15:52:02.919565    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:02.997357    1544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 15:52:04.154714    1544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157354834s)
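The 144-byte /etc/docker/daemon.json written above is what pins Docker to the cgroupfs driver so it matches the kubelet's cgroupDriver setting. The log does not show the file's contents; a minimal sketch of a daemon.json that selects cgroupfs (the keys are standard dockerd options, but the exact values minikube writes are an assumption here):

	# Hedged sketch: write a daemon.json selecting the cgroupfs driver.
	sudo tee /etc/docker/daemon.json <<'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" }
	}
	EOF

Whatever the file contains, the effective driver is what `docker info --format {{.CgroupDriver}}` reports later in this log.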
	I0719 15:52:04.154783    1544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 15:52:04.233603    1544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 15:52:04.316105    1544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 15:52:04.398081    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:04.476954    1544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 15:52:04.483791    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:04.563625    1544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0719 15:52:04.586684    1544 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 15:52:04.586782    1544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 15:52:04.589057    1544 start.go:534] Will wait 60s for crictl version
	I0719 15:52:04.589109    1544 ssh_runner.go:195] Run: which crictl
	I0719 15:52:04.590411    1544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:52:04.605474    1544 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
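crictl picked up its runtime endpoint from the /etc/crictl.yaml written above, which is why the bare `crictl version` call reached cri-dockerd. The endpoint can also be passed explicitly; a minimal sketch:

	# Query cri-dockerd directly, independent of /etc/crictl.yaml.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version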
	I0719 15:52:04.605558    1544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 15:52:04.615366    1544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 15:52:04.635830    1544 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0719 15:52:04.635983    1544 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0719 15:52:04.637460    1544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:52:04.641595    1544 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:52:04.641637    1544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 15:52:04.650900    1544 docker.go:636] Got preloaded images: 
	I0719 15:52:04.650907    1544 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0719 15:52:04.650940    1544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 15:52:04.654168    1544 ssh_runner.go:195] Run: which lz4
	I0719 15:52:04.655512    1544 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:52:04.656801    1544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:52:04.656815    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0719 15:52:05.915036    1544 docker.go:600] Took 1.259592 seconds to copy over tarball
	I0719 15:52:05.915105    1544 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:52:06.974144    1544 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.059035792s)
	I0719 15:52:06.974158    1544 ssh_runner.go:146] rm: /preloaded.tar.lz4
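The preload step copies a ~344 MB lz4-compressed tarball of image layers into the guest, unpacks it under /var, and deletes it, which saves pulling each image individually. The cached tarball can be listed on the host without extracting it; a minimal sketch (requires the lz4 CLI):

	# Peek at the first entries of the preloaded image tarball.
	lz4 -dc /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 \
	  | tar -t | head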
	I0719 15:52:06.989746    1544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 15:52:06.993185    1544 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0719 15:52:06.998174    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:07.075272    1544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 15:52:09.295448    1544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.220180667s)
	I0719 15:52:09.295552    1544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 15:52:09.301832    1544 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 15:52:09.301841    1544 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:52:09.301925    1544 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 15:52:09.309268    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:52:09.309280    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:52:09.309309    1544 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0719 15:52:09.309319    1544 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-101000 NodeName:addons-101000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:52:09.309384    1544 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-101000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:52:09.309419    1544 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-101000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
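The rendered kubeadm.yaml above is a four-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). It can be sanity-checked before an init without mutating cluster state; a minimal sketch using the binary and config paths from this run:

	# Validate the config and print what init would do, without changing anything.
	sudo /var/lib/minikube/binaries/v1.27.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run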
	I0719 15:52:09.309476    1544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0719 15:52:09.312676    1544 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:52:09.312711    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:52:09.315418    1544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0719 15:52:09.320330    1544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:52:09.325480    1544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0719 15:52:09.330778    1544 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0719 15:52:09.332164    1544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:52:09.335590    1544 certs.go:56] Setting up /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000 for IP: 192.168.105.2
	I0719 15:52:09.335612    1544 certs.go:190] acquiring lock for shared ca certs: {Name:mk57268b94adc82cb06ba056d8f0acecf538b87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.335779    1544 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key
	I0719 15:52:09.375531    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt ...
	I0719 15:52:09.375537    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt: {Name:mk18dc73651ebb7586f5cc870528fe59bb3eaca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.375716    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key ...
	I0719 15:52:09.375718    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key: {Name:mkf4847c0170d0ed2e02012567d5849b7cdc3e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.375829    1544 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key
	I0719 15:52:09.479964    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt ...
	I0719 15:52:09.479968    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt: {Name:mk931f43b9aeac1a637bc02f03d26df5c2c21559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.480104    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key ...
	I0719 15:52:09.480107    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key: {Name:mkbbced5a2200a63ea6918cadfce8d25c9e09696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.480228    1544 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key
	I0719 15:52:09.480236    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt with IP's: []
	I0719 15:52:09.550153    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt ...
	I0719 15:52:09.550157    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: {Name:mkfbd0ec0d392f0ad08f01bd61787ea0a90ba52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.550273    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key ...
	I0719 15:52:09.550276    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key: {Name:mk8c74eed8437e78eaa33e4b6b240669ae86a824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.550378    1544 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969
	I0719 15:52:09.550392    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0719 15:52:09.700054    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 ...
	I0719 15:52:09.700063    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969: {Name:mkf03671886dbbbb632ec2e172f912e064d8e1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.700299    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969 ...
	I0719 15:52:09.700303    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969: {Name:mk303c08d6b543a4cd38e9de14800a408d1d2869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.700416    1544 certs.go:337] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt
	I0719 15:52:09.700598    1544 certs.go:341] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key
	I0719 15:52:09.700687    1544 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key
	I0719 15:52:09.700696    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt with IP's: []
	I0719 15:52:09.741490    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt ...
	I0719 15:52:09.741493    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt: {Name:mk78bf5cf7588d5f6faf8ac273455bded2325b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.741610    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key ...
	I0719 15:52:09.741617    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key: {Name:mkd247533695e3682be8e4d6fb67fe0e52efd3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.741840    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 15:52:09.741864    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:52:09.741886    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:52:09.741911    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem (1675 bytes)
	I0719 15:52:09.742194    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0719 15:52:09.749753    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:52:09.756925    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:52:09.764081    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:52:09.770653    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:52:09.777938    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 15:52:09.785429    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:52:09.792823    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:52:09.799580    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:52:09.806238    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:52:09.812452    1544 ssh_runner.go:195] Run: openssl version
	I0719 15:52:09.814304    1544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:52:09.817773    1544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.819362    1544 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 19 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.819381    1544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.821228    1544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
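The b5213941.0 link name follows OpenSSL's c_rehash convention, <subject-hash>.0, which is how TLS clients locate the CA in /etc/ssl/certs by hashed subject. The hash printed by the command above is exactly what names the symlink; a minimal sketch:

	# The subject hash names the trust-store symlink.
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0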
	I0719 15:52:09.824165    1544 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0719 15:52:09.825567    1544 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0719 15:52:09.825608    1544 kubeadm.go:404] StartCluster: {Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:52:09.825669    1544 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 15:52:09.831218    1544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:52:09.834469    1544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:52:09.837516    1544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:52:09.840473    1544 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:52:09.840487    1544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:52:09.861549    1544 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0719 15:52:09.861581    1544 kubeadm.go:322] [preflight] Running pre-flight checks
	I0719 15:52:09.914251    1544 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:52:09.914308    1544 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:52:09.914371    1544 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 15:52:09.979760    1544 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:52:09.988950    1544 out.go:204]   - Generating certificates and keys ...
	I0719 15:52:09.988981    1544 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0719 15:52:09.989014    1544 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0719 15:52:10.135496    1544 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 15:52:10.224881    1544 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0719 15:52:10.328051    1544 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0719 15:52:10.529629    1544 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0719 15:52:10.721019    1544 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0719 15:52:10.721090    1544 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-101000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0719 15:52:10.787563    1544 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0719 15:52:10.787619    1544 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-101000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0719 15:52:10.835004    1544 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 15:52:10.905361    1544 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 15:52:10.998864    1544 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0719 15:52:10.998890    1544 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:52:11.030652    1544 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:52:11.128642    1544 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:52:11.289310    1544 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:52:11.400745    1544 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:52:11.407437    1544 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:52:11.407496    1544 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:52:11.407517    1544 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0719 15:52:11.489012    1544 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:52:11.495210    1544 out.go:204]   - Booting up control plane ...
	I0719 15:52:11.495279    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:52:11.495324    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:52:11.495356    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:52:11.495394    1544 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:52:11.496331    1544 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:52:15.497794    1544 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001180 seconds
	I0719 15:52:15.497922    1544 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 15:52:15.503484    1544 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 15:52:16.021124    1544 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 15:52:16.021263    1544 kubeadm.go:322] [mark-control-plane] Marking the node addons-101000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 15:52:16.527825    1544 kubeadm.go:322] [bootstrap-token] Using token: za2ad5.mzmzgft4t0cdmv0r
	I0719 15:52:16.534544    1544 out.go:204]   - Configuring RBAC rules ...
	I0719 15:52:16.534604    1544 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 15:52:16.535903    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 15:52:16.539838    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 15:52:16.541602    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 15:52:16.542999    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 15:52:16.544402    1544 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 15:52:16.549043    1544 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 15:52:16.720317    1544 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0719 15:52:16.937953    1544 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0719 15:52:16.938484    1544 kubeadm.go:322] 
	I0719 15:52:16.938519    1544 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0719 15:52:16.938527    1544 kubeadm.go:322] 
	I0719 15:52:16.938563    1544 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0719 15:52:16.938588    1544 kubeadm.go:322] 
	I0719 15:52:16.938601    1544 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0719 15:52:16.938634    1544 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 15:52:16.938667    1544 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 15:52:16.938672    1544 kubeadm.go:322] 
	I0719 15:52:16.938695    1544 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0719 15:52:16.938698    1544 kubeadm.go:322] 
	I0719 15:52:16.938723    1544 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 15:52:16.938730    1544 kubeadm.go:322] 
	I0719 15:52:16.938754    1544 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0719 15:52:16.938790    1544 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 15:52:16.938841    1544 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 15:52:16.938844    1544 kubeadm.go:322] 
	I0719 15:52:16.938885    1544 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 15:52:16.938940    1544 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0719 15:52:16.938947    1544 kubeadm.go:322] 
	I0719 15:52:16.938990    1544 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token za2ad5.mzmzgft4t0cdmv0r \
	I0719 15:52:16.939068    1544 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 \
	I0719 15:52:16.939079    1544 kubeadm.go:322] 	--control-plane 
	I0719 15:52:16.939082    1544 kubeadm.go:322] 
	I0719 15:52:16.939124    1544 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0719 15:52:16.939129    1544 kubeadm.go:322] 
	I0719 15:52:16.939171    1544 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token za2ad5.mzmzgft4t0cdmv0r \
	I0719 15:52:16.939230    1544 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 
	I0719 15:52:16.939287    1544 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
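The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control-plane node with the OpenSSL pipeline from the kubeadm documentation; a sketch using the certificatesDir from the config above (works for the RSA CA kubeadm generates):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'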
	I0719 15:52:16.939293    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:52:16.939300    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:52:16.943663    1544 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:52:16.946263    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:52:16.949272    1544 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0719 15:52:16.953970    1544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:52:16.954045    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:16.954043    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4 minikube.k8s.io/name=addons-101000 minikube.k8s.io/updated_at=2023_07_19T15_52_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:17.018155    1544 ops.go:34] apiserver oom_adj: -16
	I0719 15:52:17.018193    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:17.554061    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:18.053987    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:18.552765    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:19.054043    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:19.552096    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:20.054238    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:20.554269    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:21.054238    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:21.554235    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:22.054268    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:22.554252    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:23.054226    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:23.554175    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:24.054171    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:24.553977    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:25.054181    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:25.553942    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:26.053975    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:26.553905    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:27.053866    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:27.553341    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.052551    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.553332    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.053858    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.553880    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.052491    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.101801    1544 kubeadm.go:1081] duration metric: took 13.147921333s to wait for elevateKubeSystemPrivileges.
	I0719 15:52:30.101816    1544 kubeadm.go:406] StartCluster complete in 20.276428833s
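The burst of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges wait: minikube polls roughly twice a second until the default ServiceAccount exists before it relies on the cluster-admin binding created earlier. Expressed as an explicit loop, a minimal sketch with the paths from this run:

	# Poll until the default ServiceAccount appears (the 13s measured above).
	until sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done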
	I0719 15:52:30.101824    1544 settings.go:142] acquiring lock: {Name:mk58631521ffd49c3231a31589bcae3549c3b53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:30.101973    1544 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:52:30.102164    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/kubeconfig: {Name:mk508b0ad49e7803e8fd5dcb96b45d1248a097b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:30.102360    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 15:52:30.102400    1544 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0719 15:52:30.102441    1544 addons.go:69] Setting volumesnapshots=true in profile "addons-101000"
	I0719 15:52:30.102447    1544 addons.go:231] Setting addon volumesnapshots=true in "addons-101000"
	I0719 15:52:30.102465    1544 addons.go:69] Setting metrics-server=true in profile "addons-101000"
	I0719 15:52:30.102475    1544 addons.go:231] Setting addon metrics-server=true in "addons-101000"
	I0719 15:52:30.102477    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102506    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102510    1544 addons.go:69] Setting ingress=true in profile "addons-101000"
	I0719 15:52:30.102533    1544 addons.go:69] Setting ingress-dns=true in profile "addons-101000"
	I0719 15:52:30.102557    1544 addons.go:231] Setting addon ingress=true in "addons-101000"
	I0719 15:52:30.102537    1544 addons.go:69] Setting inspektor-gadget=true in profile "addons-101000"
	I0719 15:52:30.102572    1544 addons.go:231] Setting addon inspektor-gadget=true in "addons-101000"
	I0719 15:52:30.102587    1544 addons.go:69] Setting registry=true in profile "addons-101000"
	I0719 15:52:30.102609    1544 addons.go:231] Setting addon registry=true in "addons-101000"
	I0719 15:52:30.102627    1544 addons.go:231] Setting addon ingress-dns=true in "addons-101000"
	I0719 15:52:30.102650    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102671    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102679    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102685    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:30.102713    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102729    1544 addons.go:69] Setting storage-provisioner=true in profile "addons-101000"
	I0719 15:52:30.102734    1544 addons.go:231] Setting addon storage-provisioner=true in "addons-101000"
	I0719 15:52:30.102749    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102918    1544 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-101000"
	I0719 15:52:30.102930    1544 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-101000"
	I0719 15:52:30.102932    1544 addons.go:69] Setting cloud-spanner=true in profile "addons-101000"
	I0719 15:52:30.102942    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102946    1544 addons.go:231] Setting addon cloud-spanner=true in "addons-101000"
	I0719 15:52:30.102975    1544 host.go:66] Checking if "addons-101000" exists ...
	W0719 15:52:30.103028    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103039    1544 addons.go:277] "addons-101000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0719 15:52:30.103042    1544 addons.go:467] Verifying addon ingress=true in "addons-101000"
	I0719 15:52:30.106383    1544 out.go:177] * Verifying ingress addon...
	W0719 15:52:30.103028    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103167    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103182    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	I0719 15:52:30.103217    1544 addons.go:69] Setting default-storageclass=true in profile "addons-101000"
	W0719 15:52:30.103227    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103229    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	I0719 15:52:30.103236    1544 addons.go:69] Setting gcp-auth=true in profile "addons-101000"
	W0719 15:52:30.103647    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.115464    1544 addons.go:277] "addons-101000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115470    1544 addons.go:277] "addons-101000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115473    1544 addons.go:277] "addons-101000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115476    1544 addons.go:277] "addons-101000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115487    1544 mustload.go:65] Loading cluster: addons-101000
	W0719 15:52:30.115490    1544 addons.go:277] "addons-101000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115530    1544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-101000"
	W0719 15:52:30.115554    1544 addons.go:277] "addons-101000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
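
Each connection-refused warning above is the qemu2 driver's state probe failing: it dials the VM's monitor unix socket under .minikube/machines/addons-101000/, and a refused dial is reported as a non-running host, so the affected addon is still flagged true in the profile but its enablement callbacks are skipped. The same socket can be probed by hand (a sketch; socat is not part of the harness):

    socat -u OPEN:/dev/null \
      UNIX-CONNECT:"$HOME/.minikube/machines/addons-101000/monitor" \
      && echo "monitor reachable" || echo "connection refused"
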
	I0719 15:52:30.115896    1544 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 15:52:30.118371    1544 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 15:52:30.125455    1544 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 15:52:30.125463    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 15:52:30.125471    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
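
The "scp memory -->" lines mean each manifest is embedded in the minikube binary and streamed into the guest over the SSH session described on the same line, rather than copied from the host filesystem. Roughly equivalent by hand, using the IP, user, and key shown above (key path abbreviated, a sketch only):

    ssh -i ~/.minikube/machines/addons-101000/id_rsa docker@192.168.105.2 \
      "sudo tee /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml >/dev/null" \
      < csi-hostpath-snapshotclass.yaml
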
	I0719 15:52:30.122419    1544 addons.go:467] Verifying addon registry=true in "addons-101000"
	I0719 15:52:30.122435    1544 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0719 15:52:30.133254    1544 out.go:177] * Verifying registry addon...
	I0719 15:52:30.122534    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:30.122472    1544 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-101000"
	I0719 15:52:30.126814    1544 addons.go:231] Setting addon default-storageclass=true in "addons-101000"
	I0719 15:52:30.127301    1544 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 15:52:30.129401    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:52:30.136476    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:52:30.136487    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.139357    1544 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 15:52:30.136602    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.137003    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 15:52:30.137526    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.146971    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 15:52:30.147364    1544 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:30.147370    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:52:30.147377    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.154043    1544 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 15:52:30.155110    1544 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 15:52:30.167928    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
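
The bash pipeline above patches CoreDNS in place: the first sed expression inserts a hosts block that resolves host.minikube.internal to the gateway 192.168.105.1 ahead of the forward plugin, the second enables query logging, and the edited Corefile is pushed back with kubectl replace. The result can be inspected afterwards; the expected stanza, reconstructed from the sed expressions with unrelated plugins elided, is shown in the comments:

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # .:53 {
    #     log
    #     errors
    #     ...
    #     hosts {
    #        192.168.105.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf
    #     ...
    # }
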
	I0719 15:52:30.178534    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 15:52:30.178543    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 15:52:30.183588    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 15:52:30.183597    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 15:52:30.191691    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 15:52:30.191699    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 15:52:30.203614    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:52:30.203623    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 15:52:30.208695    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:30.219630    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:52:30.219641    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:52:30.228005    1544 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 15:52:30.228015    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 15:52:30.232889    1544 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:30.232896    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 15:52:30.237389    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:30.293905    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:30.293918    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:52:30.323074    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:30.620570    1544 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-101000" context rescaled to 1 replicas
	I0719 15:52:30.620588    1544 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 15:52:30.627928    1544 out.go:177] * Verifying Kubernetes components...
	I0719 15:52:30.631983    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
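
systemctl is-active --quiet prints nothing and reports purely through its exit status, which is all ssh_runner needs for this kubelet liveness check. An illustrative one-liner:

    sudo systemctl is-active --quiet kubelet && echo running || echo not-running
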
	I0719 15:52:30.771514    1544 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0719 15:52:30.880793    1544 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 15:52:30.880817    1544 retry.go:31] will retry after 267.649411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
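
The failure and retry above are a CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl apply batch as the CRDs that define its kind, and the REST mapper has no mapping until the freshly created CRDs are established and served by discovery, hence "ensure CRDs are installed first". The retry (and the apply --force issued at 15:52:31) succeeds about 2.5s later once discovery catches up. A race-free ordering would apply and await the CRDs before the custom resources, e.g. (a sketch using the same manifests):

    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      crd/volumesnapshotcontents.snapshot.storage.k8s.io \
      crd/volumesnapshots.snapshot.storage.k8s.io
    kubectl apply \
      -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml \
      -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml \
      -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
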
	I0719 15:52:30.944103    1544 addons.go:467] Verifying addon metrics-server=true in "addons-101000"
	I0719 15:52:30.944554    1544 node_ready.go:35] waiting up to 6m0s for node "addons-101000" to be "Ready" ...
	I0719 15:52:30.946971    1544 node_ready.go:49] node "addons-101000" has status "Ready":"True"
	I0719 15:52:30.946984    1544 node_ready.go:38] duration metric: took 2.412542ms waiting for node "addons-101000" to be "Ready" ...
	I0719 15:52:30.946988    1544 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:30.951440    1544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:31.148622    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:32.960900    1544 pod_ready.go:102] pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:33.460293    1544 pod_ready.go:92] pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.460303    1544 pod_ready.go:81] duration metric: took 2.50887925s waiting for pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.460308    1544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.463445    1544 pod_ready.go:92] pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.463453    1544 pod_ready.go:81] duration metric: took 3.140459ms waiting for pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.463458    1544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.466021    1544 pod_ready.go:92] pod "etcd-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.466025    1544 pod_ready.go:81] duration metric: took 2.564083ms waiting for pod "etcd-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.466029    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.468619    1544 pod_ready.go:92] pod "kube-apiserver-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.468625    1544 pod_ready.go:81] duration metric: took 2.592875ms waiting for pod "kube-apiserver-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.468629    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.471097    1544 pod_ready.go:92] pod "kube-controller-manager-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.471103    1544 pod_ready.go:81] duration metric: took 2.47075ms waiting for pod "kube-controller-manager-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.471106    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jpdlk" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.691576    1544 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542959542s)
	I0719 15:52:33.857829    1544 pod_ready.go:92] pod "kube-proxy-jpdlk" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.857838    1544 pod_ready.go:81] duration metric: took 386.731917ms waiting for pod "kube-proxy-jpdlk" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.857843    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:34.259787    1544 pod_ready.go:92] pod "kube-scheduler-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:34.259798    1544 pod_ready.go:81] duration metric: took 401.956208ms waiting for pod "kube-scheduler-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:34.259802    1544 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:36.666794    1544 pod_ready.go:102] pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:36.752105    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 15:52:36.752122    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:36.786675    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 15:52:36.794559    1544 addons.go:231] Setting addon gcp-auth=true in "addons-101000"
	I0719 15:52:36.794583    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:36.795348    1544 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 15:52:36.795361    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:36.829668    1544 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0719 15:52:36.833634    1544 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0719 15:52:36.837615    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 15:52:36.837621    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 15:52:36.843015    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 15:52:36.843020    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 15:52:36.847964    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 15:52:36.847971    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0719 15:52:36.853694    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 15:52:37.162175    1544 pod_ready.go:92] pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:37.162186    1544 pod_ready.go:81] duration metric: took 2.902411833s waiting for pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:37.162191    1544 pod_ready.go:38] duration metric: took 6.215265042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:37.162200    1544 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:37.162261    1544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:37.213333    1544 api_server.go:72] duration metric: took 6.592798333s to wait for apiserver process to appear ...
	I0719 15:52:37.213345    1544 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:37.213352    1544 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0719 15:52:37.213987    1544 addons.go:467] Verifying addon gcp-auth=true in "addons-101000"
	I0719 15:52:37.217265    1544 out.go:177] * Verifying gcp-auth addon...
	I0719 15:52:37.224564    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 15:52:37.226285    1544 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0719 15:52:37.227830    1544 api_server.go:141] control plane version: v1.27.3
	I0719 15:52:37.227837    1544 api_server.go:131] duration metric: took 14.489167ms to wait for apiserver health ...
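
The healthz wait is a plain HTTPS GET against the apiserver; the 200 above came with the literal body "ok". Default RBAC lets even unauthenticated clients read /healthz, so it is reproducible by hand (illustrative; -k skips certificate verification for a throwaway check):

    curl -k https://192.168.105.2:8443/healthz
    # ok
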
	I0719 15:52:37.227841    1544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:37.231500    1544 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 15:52:37.231508    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:37.232481    1544 system_pods.go:59] 10 kube-system pods found
	I0719 15:52:37.232488    1544 system_pods.go:61] "coredns-5d78c9869d-4dqmf" [3335e5bb-cd1b-4f88-8662-5f52c470886e] Running
	I0719 15:52:37.232491    1544 system_pods.go:61] "coredns-5d78c9869d-knvd5" [8982b7a7-b4c0-4d8f-aaac-24b2aa68727d] Running
	I0719 15:52:37.232493    1544 system_pods.go:61] "etcd-addons-101000" [76e23cae-d673-4ce3-8c1a-28173c4ad7fb] Running
	I0719 15:52:37.232495    1544 system_pods.go:61] "kube-apiserver-addons-101000" [b5b978e3-f407-4753-974c-733ac90e64b3] Running
	I0719 15:52:37.232497    1544 system_pods.go:61] "kube-controller-manager-addons-101000" [d4160a3e-e6e8-450b-8484-7fe93b6a935c] Running
	I0719 15:52:37.232500    1544 system_pods.go:61] "kube-proxy-jpdlk" [e6fd914c-3322-4343-bfc3-e20cc61a9c1b] Running
	I0719 15:52:37.232503    1544 system_pods.go:61] "kube-scheduler-addons-101000" [22d2fcb7-43d8-4531-958e-506bd7b1d4b6] Running
	I0719 15:52:37.232506    1544 system_pods.go:61] "metrics-server-844d8db974-vt8ml" [a25120a0-a3f2-4a32-851b-21a7b451818f] Running
	I0719 15:52:37.232510    1544 system_pods.go:61] "snapshot-controller-75bbb956b9-9qbz2" [449931f4-f646-44cb-b8ae-3246b6e14db1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.232514    1544 system_pods.go:61] "snapshot-controller-75bbb956b9-gsppf" [358b5d0e-1f68-47f3-bc7d-ebbf387b2e40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.232518    1544 system_pods.go:74] duration metric: took 4.674833ms to wait for pod list to return data ...
	I0719 15:52:37.232523    1544 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:37.235194    1544 default_sa.go:45] found service account: "default"
	I0719 15:52:37.235203    1544 default_sa.go:55] duration metric: took 2.676875ms for default service account to be created ...
	I0719 15:52:37.235207    1544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:37.262055    1544 system_pods.go:86] 10 kube-system pods found
	I0719 15:52:37.262063    1544 system_pods.go:89] "coredns-5d78c9869d-4dqmf" [3335e5bb-cd1b-4f88-8662-5f52c470886e] Running
	I0719 15:52:37.262066    1544 system_pods.go:89] "coredns-5d78c9869d-knvd5" [8982b7a7-b4c0-4d8f-aaac-24b2aa68727d] Running
	I0719 15:52:37.262069    1544 system_pods.go:89] "etcd-addons-101000" [76e23cae-d673-4ce3-8c1a-28173c4ad7fb] Running
	I0719 15:52:37.262071    1544 system_pods.go:89] "kube-apiserver-addons-101000" [b5b978e3-f407-4753-974c-733ac90e64b3] Running
	I0719 15:52:37.262073    1544 system_pods.go:89] "kube-controller-manager-addons-101000" [d4160a3e-e6e8-450b-8484-7fe93b6a935c] Running
	I0719 15:52:37.262075    1544 system_pods.go:89] "kube-proxy-jpdlk" [e6fd914c-3322-4343-bfc3-e20cc61a9c1b] Running
	I0719 15:52:37.262078    1544 system_pods.go:89] "kube-scheduler-addons-101000" [22d2fcb7-43d8-4531-958e-506bd7b1d4b6] Running
	I0719 15:52:37.262080    1544 system_pods.go:89] "metrics-server-844d8db974-vt8ml" [a25120a0-a3f2-4a32-851b-21a7b451818f] Running
	I0719 15:52:37.262085    1544 system_pods.go:89] "snapshot-controller-75bbb956b9-9qbz2" [449931f4-f646-44cb-b8ae-3246b6e14db1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.262089    1544 system_pods.go:89] "snapshot-controller-75bbb956b9-gsppf" [358b5d0e-1f68-47f3-bc7d-ebbf387b2e40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.262093    1544 system_pods.go:126] duration metric: took 26.883625ms to wait for k8s-apps to be running ...
	I0719 15:52:37.262096    1544 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:37.262153    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:37.267299    1544 system_svc.go:56] duration metric: took 5.200291ms WaitForService to wait for kubelet.
	I0719 15:52:37.267305    1544 kubeadm.go:581] duration metric: took 6.646776125s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0719 15:52:37.267313    1544 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:37.460427    1544 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0719 15:52:37.460461    1544 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:37.460466    1544 node_conditions.go:105] duration metric: took 193.152542ms to run NodePressure ...
	I0719 15:52:37.460471    1544 start.go:228] waiting for startup goroutines ...
	I0719 15:52:37.735761    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:38.234684    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:38.735371    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:39.235765    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:39.735907    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:40.235373    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:40.734681    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:41.235287    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:41.735207    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:42.235545    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:42.734812    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:43.235286    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:43.736840    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:44.235205    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:44.738509    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:45.235248    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:45.735521    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:46.236102    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:46.735748    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:47.235790    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:47.736196    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:48.235182    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:48.735185    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:49.237494    1544 kapi.go:107] duration metric: took 12.013055167s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 15:52:49.242337    1544 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-101000 cluster.
	I0719 15:52:49.247318    1544 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 15:52:49.251640    1544 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
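
The gcp-auth webhook mutates pods at admission, so the gcp-auth-skip-secret label has to be present when a pod is created; one way to launch an opted-out pod (hypothetical name and image):

    kubectl run credless --image=nginx --labels=gcp-auth-skip-secret=true
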
	I0719 15:58:30.120713    1544 kapi.go:107] duration metric: took 6m0.008647875s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0719 15:58:30.121029    1544 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0719 15:58:30.144635    1544 kapi.go:107] duration metric: took 6m0.001552791s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 15:58:30.144676    1544 kapi.go:107] duration metric: took 6m0.011563667s to wait for kubernetes.io/minikube-addons=registry ...
	W0719 15:58:30.144755    1544 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0719 15:58:30.144795    1544 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
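
All three failures share one shape: a kapi.go pod wait watched its label selector for the full 6m0s without ever seeing a matching pod, then surfaced context deadline exceeded. A post-mortem would start from the selectors the waits logged at 15:52:30 (hypothetical session):

    kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
    kubectl -n kube-system  get pods -l kubernetes.io/minikube-addons=registry
    kubectl -n kube-system  get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
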
	I0719 15:58:30.152624    1544 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, inspektor-gadget, default-storageclass, metrics-server, volumesnapshots, gcp-auth
	I0719 15:58:30.164713    1544 addons.go:502] enable addons completed in 6m0.066202625s: enabled=[ingress-dns storage-provisioner cloud-spanner inspektor-gadget default-storageclass metrics-server volumesnapshots gcp-auth]
	I0719 15:58:30.164763    1544 start.go:233] waiting for cluster config update ...
	I0719 15:58:30.164788    1544 start.go:242] writing updated cluster config ...
	I0719 15:58:30.169625    1544 ssh_runner.go:195] Run: rm -f paused
	I0719 15:58:30.237703    1544 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0719 15:58:30.241711    1544 out.go:177] * Done! kubectl is now configured to use "addons-101000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-07-19 22:51:58 UTC, ends at Wed 2023-07-19 23:18:36 UTC. --
	Jul 19 22:52:45 addons-101000 dockerd[1149]: time="2023-07-19T22:52:45.710423798Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jul 19 22:52:48 addons-101000 cri-dockerd[1051]: time="2023-07-19T22:52:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502093407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502123765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502132147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502138277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526761438Z" level=info msg="shim disconnected" id=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526811146Z" level=warning msg="cleaning up after shim disconnected" id=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526818938Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1149]: time="2023-07-19T23:10:37.527016893Z" level=info msg="ignoring event" container=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596743616Z" level=info msg="shim disconnected" id=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596774491Z" level=warning msg="cleaning up after shim disconnected" id=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596779074Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1149]: time="2023-07-19T23:10:37.596907072Z" level=info msg="ignoring event" container=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:12:10 addons-101000 dockerd[1155]: time="2023-07-19T23:12:10.941118507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:12:10 addons-101000 dockerd[1155]: time="2023-07-19T23:12:10.941154798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:12:10 addons-101000 dockerd[1155]: time="2023-07-19T23:12:10.941371212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:12:10 addons-101000 dockerd[1155]: time="2023-07-19T23:12:10.941385587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:12:11 addons-101000 cri-dockerd[1051]: time="2023-07-19T23:12:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97604dcd5e154149edefbf9219c003b4875f842d78dcc25aed66f8b4ef217365/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 23:12:11 addons-101000 dockerd[1149]: time="2023-07-19T23:12:11.306418944Z" level=warning msg="reference for unknown type: " digest="sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45" remote="ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45"
	Jul 19 23:12:16 addons-101000 cri-dockerd[1051]: time="2023-07-19T23:12:16Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.18.0@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45"
	Jul 19 23:12:16 addons-101000 dockerd[1155]: time="2023-07-19T23:12:16.589255166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:12:16 addons-101000 dockerd[1155]: time="2023-07-19T23:12:16.589620995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:12:16 addons-101000 dockerd[1155]: time="2023-07-19T23:12:16.589634120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:12:16 addons-101000 dockerd[1155]: time="2023-07-19T23:12:16.589638953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	6912ff4c65e89       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                     6 minutes ago       Running             headlamp                     0                   97604dcd5e154
	1cc3491bfa533       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              25 minutes ago      Running             gcp-auth                     0                   0ee1896a26320
	2b9a59faa57f0       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   25 minutes ago      Running             volume-snapshot-controller   0                   4d989e719f82a
	d47989aa362eb       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   25 minutes ago      Running             volume-snapshot-controller   0                   16bf9bed7271f
	0df05b2c74afc       97e04611ad434                                                                                                             26 minutes ago      Running             coredns                      0                   49ab25acb4281
	19588c52e552d       fb73e92641fd5                                                                                                             26 minutes ago      Running             kube-proxy                   0                   4d2bba9dbbd12
	f959c7f626d6e       24bc64e911039                                                                                                             26 minutes ago      Running             etcd                         0                   e9214702e68a5
	c6c632dd083f2       bcb9e554eaab6                                                                                                             26 minutes ago      Running             kube-scheduler               0                   956f93b928e2f
	862babcc9993e       ab3683b584ae5                                                                                                             26 minutes ago      Running             kube-controller-manager      0                   81af0dc9e0f17
	5984dda0d68af       39dfb036b0986                                                                                                             26 minutes ago      Running             kube-apiserver               0                   b8a23dc6dd212
	
	* 
	* ==> coredns [0df05b2c74af] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38035 - 43333 "HINFO IN 3178013197050500524.6871022848785512211. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004455439s
	[INFO] 10.244.0.9:44781 - 2820 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000096121s
	[INFO] 10.244.0.9:49458 - 31580 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000042327s
	[INFO] 10.244.0.9:47092 - 42379 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000038365s
	[INFO] 10.244.0.9:51547 - 25508 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000044495s
	[INFO] 10.244.0.9:33952 - 7053 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043995s
	[INFO] 10.244.0.9:46272 - 9283 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000029482s
	[INFO] 10.244.0.9:44297 - 58626 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001146116s
	[INFO] 10.244.0.9:36915 - 33133 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001094699s
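
The NXDOMAIN ladder above is ordinary ndots:5 search-path expansion: the pod resolv.conf written by cri-dockerd (see the 23:12:11 journal entry earlier in this log) lists the cluster search domains, and storage.googleapis.com has fewer than five dots, so each search domain is tried and answered NXDOMAIN before the bare name resolves upstream. The pod-side view (hypothetical session from the gcp-auth namespace):

    kubectl -n gcp-auth exec deploy/gcp-auth -- cat /etc/resolv.conf
    # nameserver 10.96.0.10
    # search gcp-auth.svc.cluster.local svc.cluster.local cluster.local
    # options ndots:5
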
	
	* 
	* ==> describe nodes <==
	* Name:               addons-101000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-101000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4
	                    minikube.k8s.io/name=addons-101000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_19T15_52_16_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Jul 2023 22:52:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-101000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Jul 2023 23:18:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Jul 2023 23:17:26 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Jul 2023 23:17:26 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Jul 2023 23:17:26 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Jul 2023 23:17:26 +0000   Wed, 19 Jul 2023 22:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-101000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 06f7171a2e3b478b8a006d3ed11bcad4
	  System UUID:                06f7171a2e3b478b8a006d3ed11bcad4
	  Boot ID:                    388cb244-002c-43f0-bc4d-d5cefb6c596c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-hfg7x                0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  headlamp                    headlamp-66f6498c69-gdc9w                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 coredns-5d78c9869d-knvd5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     26m
	  kube-system                 etcd-addons-101000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         26m
	  kube-system                 kube-apiserver-addons-101000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-addons-101000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-jpdlk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-addons-101000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 snapshot-controller-75bbb956b9-9qbz2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 snapshot-controller-75bbb956b9-gsppf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26m                kube-proxy       
	  Normal  Starting                 26m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node addons-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node addons-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m (x7 over 26m)  kubelet          Node addons-101000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 26m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  26m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26m                kubelet          Node addons-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m                kubelet          Node addons-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m                kubelet          Node addons-101000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                26m                kubelet          Node addons-101000 status is now: NodeReady
	  Normal  RegisteredNode           26m                node-controller  Node addons-101000 event: Registered Node addons-101000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.640919] EINJ: EINJ table not found.
	[  +0.493985] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044020] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000805] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jul19 22:52] systemd-fstab-generator[483]: Ignoring "noauto" for root device
	[  +0.066120] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.415432] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.174029] systemd-fstab-generator[789]: Ignoring "noauto" for root device
	[  +0.074321] systemd-fstab-generator[800]: Ignoring "noauto" for root device
	[  +0.082963] systemd-fstab-generator[813]: Ignoring "noauto" for root device
	[  +1.234248] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +0.083333] systemd-fstab-generator[982]: Ignoring "noauto" for root device
	[  +0.083834] systemd-fstab-generator[993]: Ignoring "noauto" for root device
	[  +0.077345] systemd-fstab-generator[1004]: Ignoring "noauto" for root device
	[  +0.087881] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +2.511224] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
	[  +2.197027] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.212424] systemd-fstab-generator[1454]: Ignoring "noauto" for root device
	[  +5.140733] systemd-fstab-generator[2321]: Ignoring "noauto" for root device
	[ +14.969880] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.331585] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.144611] kauditd_printk_skb: 47 callbacks suppressed
	[  +7.075011] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.120561] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [f959c7f626d6] <==
	* {"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-101000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-19T23:02:13.581Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":796}
	{"level":"info","ts":"2023-07-19T23:02:13.583Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":796,"took":"1.901103ms","hash":1438469743}
	{"level":"info","ts":"2023-07-19T23:02:13.583Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1438469743,"revision":796,"compact-revision":-1}
	{"level":"info","ts":"2023-07-19T23:07:13.591Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":947}
	{"level":"info","ts":"2023-07-19T23:07:13.593Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":947,"took":"1.234408ms","hash":3746673681}
	{"level":"info","ts":"2023-07-19T23:07:13.593Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3746673681,"revision":947,"compact-revision":796}
	{"level":"info","ts":"2023-07-19T23:12:13.596Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1098}
	{"level":"info","ts":"2023-07-19T23:12:13.597Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1098,"took":"538.451µs","hash":2203992484}
	{"level":"info","ts":"2023-07-19T23:12:13.597Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2203992484,"revision":1098,"compact-revision":947}
	{"level":"info","ts":"2023-07-19T23:17:13.605Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1304}
	{"level":"info","ts":"2023-07-19T23:17:13.608Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1304,"took":"1.459903ms","hash":3587792068}
	{"level":"info","ts":"2023-07-19T23:17:13.608Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3587792068,"revision":1304,"compact-revision":1098}
	
	* 
	* ==> gcp-auth [1cc3491bfa53] <==
	* 2023/07/19 22:52:48 GCP Auth Webhook started!
	2023/07/19 23:12:10 Ready to marshal response ...
	2023/07/19 23:12:10 Ready to write response ...
	2023/07/19 23:12:10 Ready to marshal response ...
	2023/07/19 23:12:10 Ready to write response ...
	2023/07/19 23:12:10 Ready to marshal response ...
	2023/07/19 23:12:10 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:18:37 up 26 min,  0 users,  load average: 0.29, 0.37, 0.33
	Linux addons-101000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5984dda0d68a] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 23:11:38.207005       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 23:12:10.560685       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs=map[IPv4:10.106.68.140]
	I0719 23:12:14.240491       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:12:14.240512       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:12:14.240576       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:12:14.240585       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:12:14.243887       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:12:14.243904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0719 23:13:38.208145       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0719 23:13:38.208225       1 handler_proxy.go:100] no RequestInfo found in the context
	E0719 23:13:38.208305       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 23:13:38.208326       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 23:17:14.245472       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:17:14.245567       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:17:14.248818       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:17:14.248863       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:17:14.250212       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:17:14.250271       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0719 23:17:38.208543       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0719 23:17:38.208619       1 handler_proxy.go:100] no RequestInfo found in the context
	E0719 23:17:38.208697       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 23:17:38.209061       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
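The 503s above recur because the aggregated v1beta1.metrics.k8s.io API is still registered while its backing kube-system/metrics-server Service is gone. A hedged diagnostic sketch (assuming the same kubectl context the harness uses):

    # Confirm the backing Service is missing (expect a NotFound error)
    kubectl --context addons-101000 -n kube-system get service metrics-server
    # Inspect the APIService entry the aggregator keeps requeueing
    kubectl --context addons-101000 get apiservice v1beta1.metrics.k8s.io -o wide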
	
	* 
	* ==> kube-controller-manager [862babcc9993] <==
	* I0719 22:52:45.220523       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.220604       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0719 22:52:45.235773       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.163044       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.176901       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.183652       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.183863       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0719 22:52:46.193621       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:59.164951       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0719 22:52:59.165085       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0719 22:52:59.266614       1 shared_informer.go:318] Caches are synced for resource quota
	I0719 22:52:59.590599       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0719 22:52:59.691557       1 shared_informer.go:318] Caches are synced for garbage collector
	I0719 22:53:15.026654       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:53:15.048574       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:53:16.014227       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:53:16.037423       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 23:12:10.570902       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-66f6498c69 to 1"
	I0719 23:12:10.577203       1 event.go:307] "Event occurred" object="headlamp/headlamp-66f6498c69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-66f6498c69-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	E0719 23:12:10.579948       1 replica_set.go:544] sync "headlamp/headlamp-66f6498c69" failed with pods "headlamp-66f6498c69-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0719 23:12:10.594720       1 event.go:307] "Event occurred" object="headlamp/headlamp-66f6498c69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-66f6498c69-gdc9w"
	E0719 23:18:24.822676       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:18:24.822822       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:18:29.087582       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:18:29.087738       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
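The ProvisioningFailed events above are direct fallout of the missing "csi-hostpath-sc" StorageClass: without it the PV controller cannot select a provisioning plugin for claim default/hpvc. A hedged way to confirm, using the same context:

    # csi-hostpath-sc should be absent from this list
    kubectl --context addons-101000 get storageclass
    # The claim's event stream should repeat the same "not found" reason
    kubectl --context addons-101000 -n default describe pvc hpvc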
	
	* 
	* ==> kube-proxy [19588c52e552] <==
	* I0719 22:52:31.940544       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0719 22:52:31.940603       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0719 22:52:31.940629       1 server_others.go:554] "Using iptables proxy"
	I0719 22:52:31.978237       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0719 22:52:31.978247       1 server_others.go:192] "Using iptables Proxier"
	I0719 22:52:31.978272       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 22:52:31.978547       1 server.go:658] "Version info" version="v1.27.3"
	I0719 22:52:31.978554       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 22:52:31.980358       1 config.go:188] "Starting service config controller"
	I0719 22:52:31.980401       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0719 22:52:31.980465       1 config.go:97] "Starting endpoint slice config controller"
	I0719 22:52:31.980482       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0719 22:52:31.980953       1 config.go:315] "Starting node config controller"
	I0719 22:52:31.980984       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0719 22:52:32.081146       1 shared_informer.go:318] Caches are synced for node config
	I0719 22:52:32.081155       1 shared_informer.go:318] Caches are synced for service config
	I0719 22:52:32.081163       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c6c632dd083f] <==
	* W0719 22:52:14.250631       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 22:52:14.250639       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 22:52:14.250680       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 22:52:14.250689       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 22:52:14.250732       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 22:52:14.250739       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 22:52:14.250771       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 22:52:14.250778       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 22:52:14.250830       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 22:52:14.250837       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 22:52:14.250850       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 22:52:14.250875       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 22:52:14.250933       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:14.250977       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:14.251012       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 22:52:14.251046       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 22:52:14.251076       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:14.251088       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:15.080697       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 22:52:15.080736       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 22:52:15.086216       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:15.086240       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:15.262476       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 22:52:15.262526       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0719 22:52:15.546346       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-07-19 22:51:58 UTC, ends at Wed 2023-07-19 23:18:37 UTC. --
	Jul 19 23:13:16 addons-101000 kubelet[2341]: E0719 23:13:16.802516    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:13:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:13:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:13:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:14:16 addons-101000 kubelet[2341]: E0719 23:14:16.802055    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:14:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:14:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:14:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:15:16 addons-101000 kubelet[2341]: E0719 23:15:16.806234    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:15:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:15:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:15:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:16:16 addons-101000 kubelet[2341]: E0719 23:16:16.802041    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:16:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:16:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:16:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:17:16 addons-101000 kubelet[2341]: W0719 23:17:16.780268    2341 machine.go:65] Cannot read vendor id correctly, set empty.
	Jul 19 23:17:16 addons-101000 kubelet[2341]: E0719 23:17:16.800675    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:17:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:17:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:17:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:18:16 addons-101000 kubelet[2341]: E0719 23:18:16.804739    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:18:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:18:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:18:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
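The canary error repeats once a minute because the guest kernel exposes no ip6tables "nat" table; the Buildroot image apparently lacks, or has not loaded, ip6table_nat. A hedged check from the host, assuming the standard minikube ssh entry point for this profile:

    # Is the IPv6 nat table module loaded in the guest?
    minikube -p addons-101000 ssh -- "lsmod | grep ip6table_nat"
    # If the image ships the module, loading it should let the canary chain be created
    minikube -p addons-101000 ssh -- "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"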
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-101000 -n addons-101000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-101000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/InspektorGadget FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/InspektorGadget (480.83s)

TestAddons/parallel/CSI (720.91s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:535: failed waiting for csi-hostpath-driver pods to stabilize: context deadline exceeded
addons_test.go:537: csi-hostpath-driver pods stabilized in 6m0.002199125s
addons_test.go:540: (dbg) Run:  kubectl --context addons-101000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
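What follows is the harness polling the claim's .status.phase until it reads "Bound" or the 6m0s budget runs out. A hedged single-command equivalent of that loop (jsonpath waits need kubectl 1.23+; same context assumed):

    # Block until the claim binds, or exit non-zero after the same 6m0s budget
    kubectl --context addons-101000 -n default wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m0s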
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-101000 get pvc hpvc -o jsonpath={.status.phase} -n default
[... the identical PVC status poll above repeats 123 more times until the context deadline ...]
addons_test.go:546: failed waiting for PVC hpvc: context deadline exceeded
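For context, the loop that produced the repeated poll above behaves like this minimal Go sketch. It is a hypothetical reconstruction, not the code in addons_test.go: the two-second interval and the target phase "Bound" are assumptions.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPVCPhase shells out to kubectl (the same command logged above)
// until the PVC reports the wanted phase or the context deadline passes.
func waitForPVCPhase(ctx context.Context, kctx, name, ns, want string) error {
	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		// Errors are ignored on purpose: the PVC may not exist yet.
		out, _ := exec.CommandContext(ctx, "kubectl", "--context", kctx,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if strings.TrimSpace(string(out)) == want {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("failed waiting for PVC %s: %w", name, ctx.Err())
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForPVCPhase(ctx, "addons-101000", "hpvc", "default", "Bound"); err != nil {
		fmt.Println(err)
	}
}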
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-101000 -n addons-101000
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-101000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | -p download-only-744000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | -p download-only-744000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| delete  | -p download-only-744000        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| delete  | -p download-only-744000        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| start   | --download-only -p             | binary-mirror-101000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | binary-mirror-101000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-101000        | binary-mirror-101000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| start   | -p addons-101000               | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:58 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 16:10 PDT |                     |
	|         | addons-101000                  |                      |         |         |                     |                     |
	| addons  | addons-101000 addons           | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 16:10 PDT | 19 Jul 23 16:10 PDT |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 16:12 PDT | 19 Jul 23 16:12 PDT |
	|         | -p addons-101000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/19 15:51:46
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:51:46.194297    1544 out.go:296] Setting OutFile to fd 1 ...
	I0719 15:51:46.194422    1544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:46.194425    1544 out.go:309] Setting ErrFile to fd 2...
	I0719 15:51:46.194428    1544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:46.194533    1544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 15:51:46.195601    1544 out.go:303] Setting JSON to false
	I0719 15:51:46.210650    1544 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1277,"bootTime":1689805829,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 15:51:46.210718    1544 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 15:51:46.215551    1544 out.go:177] * [addons-101000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 15:51:46.222492    1544 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 15:51:46.226347    1544 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:51:46.222559    1544 notify.go:220] Checking for updates...
	I0719 15:51:46.229495    1544 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 15:51:46.232495    1544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:51:46.235514    1544 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 15:51:46.238453    1544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:51:46.241624    1544 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 15:51:46.245483    1544 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 15:51:46.252546    1544 start.go:298] selected driver: qemu2
	I0719 15:51:46.252550    1544 start.go:880] validating driver "qemu2" against <nil>
	I0719 15:51:46.252555    1544 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:51:46.254378    1544 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 15:51:46.257444    1544 out.go:177] * Automatically selected the socket_vmnet network
	I0719 15:51:46.260545    1544 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:51:46.260571    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:51:46.260577    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:51:46.260582    1544 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 15:51:46.260588    1544 start_flags.go:319] config:
	{Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:51:46.264618    1544 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:51:46.272375    1544 out.go:177] * Starting control plane node addons-101000 in cluster addons-101000
	I0719 15:51:46.276444    1544 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:51:46.276475    1544 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 15:51:46.276490    1544 cache.go:57] Caching tarball of preloaded images
	I0719 15:51:46.276551    1544 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 15:51:46.276557    1544 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 15:51:46.276783    1544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json ...
	I0719 15:51:46.276796    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json: {Name:mk5e2042adc5d3df20329816c5917e6964724b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:51:46.277018    1544 start.go:365] acquiring machines lock for addons-101000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:51:46.277112    1544 start.go:369] acquired machines lock for "addons-101000" in 87.917µs
	I0719 15:51:46.277123    1544 start.go:93] Provisioning new machine with config: &{Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 15:51:46.277150    1544 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 15:51:46.284454    1544 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 15:51:46.605350    1544 start.go:159] libmachine.API.Create for "addons-101000" (driver="qemu2")
	I0719 15:51:46.605388    1544 client.go:168] LocalClient.Create starting
	I0719 15:51:46.605532    1544 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 15:51:46.811960    1544 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 15:51:46.895754    1544 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 15:51:47.292066    1544 main.go:141] libmachine: Creating SSH key...
	I0719 15:51:47.506919    1544 main.go:141] libmachine: Creating Disk image...
	I0719 15:51:47.506931    1544 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 15:51:47.507226    1544 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.541355    1544 main.go:141] libmachine: STDOUT: 
	I0719 15:51:47.541387    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.541456    1544 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2 +20000M
	I0719 15:51:47.548773    1544 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 15:51:47.548786    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.548801    1544 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
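The disk-image step above is just two qemu-img calls: a raw-to-qcow2 conversion followed by a +20000M grow. A minimal Go sketch of the same sequence, assuming qemu-img is on PATH; the paths and size string are placeholders, not values from minikube's source.

package main

import (
	"fmt"
	"os/exec"
)

// createDisk mirrors the step logged above: convert a raw seed image to
// qcow2, then grow the qcow2 file by the given amount.
func createDisk(raw, qcow2, grow string) error {
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	if out, err := exec.Command("qemu-img", "resize", qcow2, grow).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
		fmt.Println(err)
	}
}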
	I0719 15:51:47.548806    1544 main.go:141] libmachine: Starting QEMU VM...
	I0719 15:51:47.548843    1544 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:3a:02:96:05:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.614276    1544 main.go:141] libmachine: STDOUT: 
	I0719 15:51:47.614314    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.614319    1544 main.go:141] libmachine: Attempt 0
	I0719 15:51:47.614337    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:49.616493    1544 main.go:141] libmachine: Attempt 1
	I0719 15:51:49.616586    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:51.618782    1544 main.go:141] libmachine: Attempt 2
	I0719 15:51:51.618840    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:53.620905    1544 main.go:141] libmachine: Attempt 3
	I0719 15:51:53.620918    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:55.622923    1544 main.go:141] libmachine: Attempt 4
	I0719 15:51:55.622935    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:57.624701    1544 main.go:141] libmachine: Attempt 5
	I0719 15:51:57.624722    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:59.626797    1544 main.go:141] libmachine: Attempt 6
	I0719 15:51:59.626823    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:59.626967    1544 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0719 15:51:59.626997    1544 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b9ba8e}
	I0719 15:51:59.627004    1544 main.go:141] libmachine: Found match: 36:3a:2:96:5:da
	I0719 15:51:59.627013    1544 main.go:141] libmachine: IP: 192.168.105.2
	I0719 15:51:59.627019    1544 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
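Note that the MAC generated for the VM is 36:3a:02:96:05:da, but the lease search key above is 36:3a:2:96:5:da: macOS's /var/db/dhcpd_leases records each octet without leading zeros, so the address must be normalized before matching. A small sketch of that normalization:

package main

import (
	"fmt"
	"strings"
)

// trimMAC rewrites a colon-separated MAC the way /var/db/dhcpd_leases
// stores it on macOS: each octet loses its leading zeros ("02" -> "2").
func trimMAC(mac string) string {
	parts := strings.Split(mac, ":")
	for i, p := range parts {
		parts[i] = strings.TrimLeft(p, "0")
		if parts[i] == "" {
			parts[i] = "0" // an all-zero octet still needs one digit
		}
	}
	return strings.Join(parts, ":")
}

func main() {
	fmt.Println(trimMAC("36:3a:02:96:05:da")) // prints 36:3a:2:96:5:da
}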
	I0719 15:52:01.648988    1544 machine.go:88] provisioning docker machine ...
	I0719 15:52:01.649049    1544 buildroot.go:166] provisioning hostname "addons-101000"
	I0719 15:52:01.650570    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.651376    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.651395    1544 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-101000 && echo "addons-101000" | sudo tee /etc/hostname
	I0719 15:52:01.738896    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-101000
	
	I0719 15:52:01.739019    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.739522    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.739541    1544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-101000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-101000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-101000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:52:01.809650    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:52:01.809665    1544 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15585-1056/.minikube CaCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15585-1056/.minikube}
	I0719 15:52:01.809675    1544 buildroot.go:174] setting up certificates
	I0719 15:52:01.809702    1544 provision.go:83] configureAuth start
	I0719 15:52:01.809710    1544 provision.go:138] copyHostCerts
	I0719 15:52:01.809873    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem (1675 bytes)
	I0719 15:52:01.810191    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem (1082 bytes)
	I0719 15:52:01.810327    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem (1123 bytes)
	I0719 15:52:01.810465    1544 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem org=jenkins.addons-101000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-101000]
	I0719 15:52:01.879682    1544 provision.go:172] copyRemoteCerts
	I0719 15:52:01.879750    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:52:01.879766    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:01.912803    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:52:01.919846    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 15:52:01.926696    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:52:01.934065    1544 provision.go:86] duration metric: configureAuth took 124.357417ms
	I0719 15:52:01.934074    1544 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:52:01.934167    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:01.934205    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.934418    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.934423    1544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 15:52:01.991251    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 15:52:01.991259    1544 buildroot.go:70] root file system type: tmpfs
	I0719 15:52:01.991320    1544 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 15:52:01.991364    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.991596    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.991636    1544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 15:52:02.049859    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 15:52:02.049895    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:02.050139    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:02.050148    1544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 15:52:02.386931    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 15:52:02.386943    1544 machine.go:91] provisioned docker machine in 737.934792ms
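The docker.service install above uses a stage-then-swap idiom: the new unit is written to docker.service.new, and only when it differs from the installed unit is it moved into place, followed by a daemon-reload, enable, and restart. A local Go sketch of the same idiom; the paths and unit name are placeholders.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// swapUnit installs the staged unit only if it differs from the current one,
// then reloads systemd and restarts the service, mirroring the shell above.
func swapUnit(installed, staged, unit string) error {
	cur, _ := os.ReadFile(installed) // a missing unit reads as empty, so it differs
	next, err := os.ReadFile(staged)
	if err != nil {
		return err
	}
	if bytes.Equal(cur, next) {
		return os.Remove(staged) // identical: drop the staged copy, change nothing
	}
	if err := os.Rename(staged, installed); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := swapUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new", "docker")
	if err != nil {
		fmt.Println(err)
	}
}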
	I0719 15:52:02.386948    1544 client.go:171] LocalClient.Create took 15.78172825s
	I0719 15:52:02.386964    1544 start.go:167] duration metric: libmachine.API.Create for "addons-101000" took 15.781794083s
	I0719 15:52:02.386972    1544 start.go:300] post-start starting for "addons-101000" (driver="qemu2")
	I0719 15:52:02.386977    1544 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:52:02.387049    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:52:02.387060    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.417331    1544 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:52:02.418797    1544 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 15:52:02.418804    1544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/addons for local assets ...
	I0719 15:52:02.418866    1544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/files for local assets ...
	I0719 15:52:02.418892    1544 start.go:303] post-start completed in 31.917459ms
	I0719 15:52:02.419241    1544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json ...
	I0719 15:52:02.419386    1544 start.go:128] duration metric: createHost completed in 16.142407666s
	I0719 15:52:02.419424    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:02.419636    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:02.419640    1544 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:52:02.473900    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689807122.468027460
	
	I0719 15:52:02.473908    1544 fix.go:206] guest clock: 1689807122.468027460
	I0719 15:52:02.473913    1544 fix.go:219] Guest: 2023-07-19 15:52:02.46802746 -0700 PDT Remote: 2023-07-19 15:52:02.419389 -0700 PDT m=+16.243794293 (delta=48.63846ms)
	I0719 15:52:02.473924    1544 fix.go:190] guest clock delta is within tolerance: 48.63846ms
	I0719 15:52:02.473927    1544 start.go:83] releasing machines lock for "addons-101000", held for 16.196985625s
	I0719 15:52:02.474254    1544 ssh_runner.go:195] Run: cat /version.json
	I0719 15:52:02.474266    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.474283    1544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:52:02.474307    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.504138    1544 ssh_runner.go:195] Run: systemctl --version
	I0719 15:52:02.506807    1544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:52:02.547286    1544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:52:02.547337    1544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:52:02.552537    1544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:52:02.552544    1544 start.go:466] detecting cgroup driver to use...
	I0719 15:52:02.552636    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:52:02.558292    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 15:52:02.561659    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 15:52:02.565184    1544 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 15:52:02.565214    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 15:52:02.568252    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 15:52:02.571117    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 15:52:02.574394    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 15:52:02.578183    1544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:52:02.581670    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 15:52:02.585324    1544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:52:02.588146    1544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:52:02.590822    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:02.668293    1544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 15:52:02.674070    1544 start.go:466] detecting cgroup driver to use...
	I0719 15:52:02.674127    1544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 15:52:02.680608    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:52:02.685038    1544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:52:02.692942    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:52:02.697559    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 15:52:02.702616    1544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 15:52:02.743950    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 15:52:02.749347    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:52:02.754962    1544 ssh_runner.go:195] Run: which cri-dockerd
	I0719 15:52:02.756311    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 15:52:02.758805    1544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 15:52:02.763719    1544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 15:52:02.840623    1544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 15:52:02.914223    1544 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 15:52:02.914238    1544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0719 15:52:02.919565    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:02.997357    1544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 15:52:04.154714    1544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157354834s)
	I0719 15:52:04.154783    1544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 15:52:04.233603    1544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 15:52:04.316105    1544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 15:52:04.398081    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:04.476954    1544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 15:52:04.483791    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:04.563625    1544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0719 15:52:04.586684    1544 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 15:52:04.586782    1544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 15:52:04.589057    1544 start.go:534] Will wait 60s for crictl version
	I0719 15:52:04.589109    1544 ssh_runner.go:195] Run: which crictl
	I0719 15:52:04.590411    1544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:52:04.605474    1544 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0719 15:52:04.605558    1544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 15:52:04.615366    1544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 15:52:04.635830    1544 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0719 15:52:04.635983    1544 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0719 15:52:04.637460    1544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:52:04.641595    1544 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:52:04.641637    1544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 15:52:04.650900    1544 docker.go:636] Got preloaded images: 
	I0719 15:52:04.650907    1544 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0719 15:52:04.650940    1544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 15:52:04.654168    1544 ssh_runner.go:195] Run: which lz4
	I0719 15:52:04.655512    1544 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:52:04.656801    1544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:52:04.656815    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0719 15:52:05.915036    1544 docker.go:600] Took 1.259592 seconds to copy over tarball
	I0719 15:52:05.915105    1544 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:52:06.974144    1544 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.059035792s)
	I0719 15:52:06.974158    1544 ssh_runner.go:146] rm: /preloaded.tar.lz4
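The preload step above stats the tarball on the guest, copies it over when absent, extracts it with tar -I lz4 into /var, and deletes it. A local sketch of the check-extract-remove part, assuming tar and lz4 are installed; the paths are placeholders.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload mirrors the sequence above: require the tarball to exist,
// extract it with lz4 decompression, then remove it to free space.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball not present: %w", err)
	}
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball).CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}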
	I0719 15:52:06.989746    1544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 15:52:06.993185    1544 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0719 15:52:06.998174    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:07.075272    1544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 15:52:09.295448    1544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.220180667s)
	I0719 15:52:09.295552    1544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 15:52:09.301832    1544 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 15:52:09.301841    1544 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:52:09.301925    1544 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 15:52:09.309268    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:52:09.309280    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:52:09.309309    1544 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0719 15:52:09.309319    1544 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-101000 NodeName:addons-101000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:52:09.309384    1544 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-101000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
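	minikube renders the kubeadm config printed above from Go text templates filled with the options recorded at kubeadm.go:176. A minimal sketch of that rendering technique, using an illustrative template and parameter struct rather than minikube's actual bsutil templates:

```go
// Sketch: render a kubeadm config from a text/template, with values
// mirroring the log output above. Not minikube's real template.
package main

import (
	"os"
	"text/template"
)

type params struct {
	AdvertiseAddress  string
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress:  "192.168.105.2",
		NodeName:          "addons-101000",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.27.3",
	})
}
```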
	I0719 15:52:09.309419    1544 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-101000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0719 15:52:09.309476    1544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0719 15:52:09.312676    1544 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:52:09.312711    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:52:09.315418    1544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0719 15:52:09.320330    1544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:52:09.325480    1544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0719 15:52:09.330778    1544 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0719 15:52:09.332164    1544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
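	The bash one-liner above makes the /etc/hosts update idempotent: it filters out any stale line for control-plane.minikube.internal, appends the fresh mapping, and copies the temp file back over /etc/hosts. A Go equivalent of the same filter-and-append technique, operating on a scratch file (hosts.test is a placeholder, not the real /etc/hosts):

```go
// Sketch: idempotently upsert a single hosts-file entry.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any previous mapping for this hostname.
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("hosts.test", "192.168.105.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```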
	I0719 15:52:09.335590    1544 certs.go:56] Setting up /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000 for IP: 192.168.105.2
	I0719 15:52:09.335612    1544 certs.go:190] acquiring lock for shared ca certs: {Name:mk57268b94adc82cb06ba056d8f0acecf538b87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.335779    1544 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key
	I0719 15:52:09.375531    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt ...
	I0719 15:52:09.375537    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt: {Name:mk18dc73651ebb7586f5cc870528fe59bb3eaca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.375716    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key ...
	I0719 15:52:09.375718    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key: {Name:mkf4847c0170d0ed2e02012567d5849b7cdc3e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.375829    1544 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key
	I0719 15:52:09.479964    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt ...
	I0719 15:52:09.479968    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt: {Name:mk931f43b9aeac1a637bc02f03d26df5c2c21559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.480104    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key ...
	I0719 15:52:09.480107    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key: {Name:mkbbced5a2200a63ea6918cadfce8d25c9e09696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.480228    1544 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key
	I0719 15:52:09.480236    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt with IP's: []
	I0719 15:52:09.550153    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt ...
	I0719 15:52:09.550157    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: {Name:mkfbd0ec0d392f0ad08f01bd61787ea0a90ba52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.550273    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key ...
	I0719 15:52:09.550276    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key: {Name:mk8c74eed8437e78eaa33e4b6b240669ae86a824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.550378    1544 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969
	I0719 15:52:09.550392    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0719 15:52:09.700054    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 ...
	I0719 15:52:09.700063    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969: {Name:mkf03671886dbbbb632ec2e172f912e064d8e1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.700299    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969 ...
	I0719 15:52:09.700303    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969: {Name:mk303c08d6b543a4cd38e9de14800a408d1d2869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.700416    1544 certs.go:337] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt
	I0719 15:52:09.700598    1544 certs.go:341] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key
	I0719 15:52:09.700687    1544 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key
	I0719 15:52:09.700696    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt with IP's: []
	I0719 15:52:09.741490    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt ...
	I0719 15:52:09.741493    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt: {Name:mk78bf5cf7588d5f6faf8ac273455bded2325b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.741610    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key ...
	I0719 15:52:09.741617    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key: {Name:mkd247533695e3682be8e4d6fb67fe0e52efd3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.741840    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 15:52:09.741864    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:52:09.741886    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:52:09.741911    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem (1675 bytes)
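	The crypto.go steps above boil down to standard-library X.509 work: generate a key pair, fill in a self-signed CA template, and PEM-encode the result. A condensed, runnable sketch of that technique (subject and validity values are illustrative, not minikube's exact ones):

```go
// Sketch: create a self-signed CA certificate with crypto/x509.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	// Self-signed: the template is both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```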
	I0719 15:52:09.742194    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0719 15:52:09.749753    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:52:09.756925    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:52:09.764081    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:52:09.770653    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:52:09.777938    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 15:52:09.785429    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:52:09.792823    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:52:09.799580    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:52:09.806238    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:52:09.812452    1544 ssh_runner.go:195] Run: openssl version
	I0719 15:52:09.814304    1544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:52:09.817773    1544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.819362    1544 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 19 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.819381    1544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.821228    1544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
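	The two commands above reproduce by hand what update-ca-certificates does: compute the certificate's OpenSSL subject hash and symlink <hash>.0 (here b5213941.0) in /etc/ssl/certs to the PEM file, so OpenSSL-based clients can find the CA by hash lookup. A sketch that shells out to openssl the same way (writing to /etc/ssl/certs requires root):

```go
// Sketch: install a CA into the OpenSSL hash-lookup directory.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := "/etc/ssl/certs/" + hash + ".0"
	err = os.Symlink("/usr/share/ca-certificates/minikubeCA.pem", link)
	if err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
}
```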
	I0719 15:52:09.824165    1544 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0719 15:52:09.825567    1544 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0719 15:52:09.825608    1544 kubeadm.go:404] StartCluster: {Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:52:09.825669    1544 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 15:52:09.831218    1544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:52:09.834469    1544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:52:09.837516    1544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:52:09.840473    1544 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:52:09.840487    1544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:52:09.861549    1544 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0719 15:52:09.861581    1544 kubeadm.go:322] [preflight] Running pre-flight checks
	I0719 15:52:09.914251    1544 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:52:09.914308    1544 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:52:09.914371    1544 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:52:09.979760    1544 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:52:09.988950    1544 out.go:204]   - Generating certificates and keys ...
	I0719 15:52:09.988981    1544 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0719 15:52:09.989014    1544 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0719 15:52:10.135496    1544 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 15:52:10.224881    1544 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0719 15:52:10.328051    1544 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0719 15:52:10.529629    1544 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0719 15:52:10.721019    1544 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0719 15:52:10.721090    1544 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-101000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0719 15:52:10.787563    1544 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0719 15:52:10.787619    1544 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-101000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0719 15:52:10.835004    1544 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 15:52:10.905361    1544 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 15:52:10.998864    1544 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0719 15:52:10.998890    1544 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:52:11.030652    1544 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:52:11.128642    1544 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:52:11.289310    1544 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:52:11.400745    1544 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:52:11.407437    1544 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:52:11.407496    1544 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:52:11.407517    1544 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0719 15:52:11.489012    1544 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:52:11.495210    1544 out.go:204]   - Booting up control plane ...
	I0719 15:52:11.495279    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:52:11.495324    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:52:11.495356    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:52:11.495394    1544 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:52:11.496331    1544 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:52:15.497794    1544 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001180 seconds
	I0719 15:52:15.497922    1544 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 15:52:15.503484    1544 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 15:52:16.021124    1544 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 15:52:16.021263    1544 kubeadm.go:322] [mark-control-plane] Marking the node addons-101000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 15:52:16.527825    1544 kubeadm.go:322] [bootstrap-token] Using token: za2ad5.mzmzgft4t0cdmv0r
	I0719 15:52:16.534544    1544 out.go:204]   - Configuring RBAC rules ...
	I0719 15:52:16.534604    1544 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 15:52:16.535903    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 15:52:16.539838    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 15:52:16.541602    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0719 15:52:16.542999    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 15:52:16.544402    1544 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 15:52:16.549043    1544 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 15:52:16.720317    1544 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0719 15:52:16.937953    1544 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0719 15:52:16.938484    1544 kubeadm.go:322] 
	I0719 15:52:16.938519    1544 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0719 15:52:16.938527    1544 kubeadm.go:322] 
	I0719 15:52:16.938563    1544 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0719 15:52:16.938588    1544 kubeadm.go:322] 
	I0719 15:52:16.938601    1544 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0719 15:52:16.938634    1544 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 15:52:16.938667    1544 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 15:52:16.938672    1544 kubeadm.go:322] 
	I0719 15:52:16.938695    1544 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0719 15:52:16.938698    1544 kubeadm.go:322] 
	I0719 15:52:16.938723    1544 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 15:52:16.938730    1544 kubeadm.go:322] 
	I0719 15:52:16.938754    1544 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0719 15:52:16.938790    1544 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 15:52:16.938841    1544 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 15:52:16.938844    1544 kubeadm.go:322] 
	I0719 15:52:16.938885    1544 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 15:52:16.938940    1544 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0719 15:52:16.938947    1544 kubeadm.go:322] 
	I0719 15:52:16.938990    1544 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token za2ad5.mzmzgft4t0cdmv0r \
	I0719 15:52:16.939068    1544 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 \
	I0719 15:52:16.939079    1544 kubeadm.go:322] 	--control-plane 
	I0719 15:52:16.939082    1544 kubeadm.go:322] 
	I0719 15:52:16.939124    1544 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0719 15:52:16.939129    1544 kubeadm.go:322] 
	I0719 15:52:16.939171    1544 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token za2ad5.mzmzgft4t0cdmv0r \
	I0719 15:52:16.939230    1544 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 
	I0719 15:52:16.939287    1544 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:52:16.939293    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:52:16.939300    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:52:16.943663    1544 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:52:16.946263    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:52:16.949272    1544 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0719 15:52:16.953970    1544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:52:16.954045    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:16.954043    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4 minikube.k8s.io/name=addons-101000 minikube.k8s.io/updated_at=2023_07_19T15_52_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:17.018155    1544 ops.go:34] apiserver oom_adj: -16
	I0719 15:52:17.018193    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:17.554061    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:18.053987    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:18.552765    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:19.054043    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:19.552096    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:20.054238    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:20.554269    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:21.054238    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:21.554235    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:22.054268    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:22.554252    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:23.054226    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:23.554175    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:24.054171    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:24.553977    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:25.054181    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:25.553942    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:26.053975    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:26.553905    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:27.053866    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:27.553341    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.052551    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.553332    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.053858    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.553880    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.052491    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.101801    1544 kubeadm.go:1081] duration metric: took 13.147921333s to wait for elevateKubeSystemPrivileges.
	I0719 15:52:30.101816    1544 kubeadm.go:406] StartCluster complete in 20.276428833s
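	The burst of "kubectl get sa default" calls above is a fixed-interval poll: minikube retries roughly every 500ms until the default service account exists, then records the total wait as a duration metric. A generic sketch of that polling pattern (the kubectl command is a stand-in for any readiness check):

```go
// Sketch: poll a check function at a fixed interval until it
// succeeds or the context deadline expires.
package main

import (
	"context"
	"log"
	"os/exec"
	"time"
)

func waitFor(ctx context.Context, interval time.Duration, check func() error) error {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		if err := check(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-t.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	start := time.Now()
	err := waitFor(ctx, 500*time.Millisecond, func() error {
		return exec.Command("kubectl", "get", "sa", "default").Run()
	})
	// Mirrors the "duration metric: took ..." log line above.
	log.Printf("waited %s, err=%v", time.Since(start), err)
}
```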
	I0719 15:52:30.101824    1544 settings.go:142] acquiring lock: {Name:mk58631521ffd49c3231a31589bcae3549c3b53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:30.101973    1544 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:52:30.102164    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/kubeconfig: {Name:mk508b0ad49e7803e8fd5dcb96b45d1248a097b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:30.102360    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 15:52:30.102400    1544 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0719 15:52:30.102441    1544 addons.go:69] Setting volumesnapshots=true in profile "addons-101000"
	I0719 15:52:30.102447    1544 addons.go:231] Setting addon volumesnapshots=true in "addons-101000"
	I0719 15:52:30.102465    1544 addons.go:69] Setting metrics-server=true in profile "addons-101000"
	I0719 15:52:30.102475    1544 addons.go:231] Setting addon metrics-server=true in "addons-101000"
	I0719 15:52:30.102477    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102506    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102510    1544 addons.go:69] Setting ingress=true in profile "addons-101000"
	I0719 15:52:30.102533    1544 addons.go:69] Setting ingress-dns=true in profile "addons-101000"
	I0719 15:52:30.102557    1544 addons.go:231] Setting addon ingress=true in "addons-101000"
	I0719 15:52:30.102537    1544 addons.go:69] Setting inspektor-gadget=true in profile "addons-101000"
	I0719 15:52:30.102572    1544 addons.go:231] Setting addon inspektor-gadget=true in "addons-101000"
	I0719 15:52:30.102587    1544 addons.go:69] Setting registry=true in profile "addons-101000"
	I0719 15:52:30.102609    1544 addons.go:231] Setting addon registry=true in "addons-101000"
	I0719 15:52:30.102627    1544 addons.go:231] Setting addon ingress-dns=true in "addons-101000"
	I0719 15:52:30.102650    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102671    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102679    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102685    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:30.102713    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102729    1544 addons.go:69] Setting storage-provisioner=true in profile "addons-101000"
	I0719 15:52:30.102734    1544 addons.go:231] Setting addon storage-provisioner=true in "addons-101000"
	I0719 15:52:30.102749    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102918    1544 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-101000"
	I0719 15:52:30.102930    1544 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-101000"
	I0719 15:52:30.102932    1544 addons.go:69] Setting cloud-spanner=true in profile "addons-101000"
	I0719 15:52:30.102942    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102946    1544 addons.go:231] Setting addon cloud-spanner=true in "addons-101000"
	I0719 15:52:30.102975    1544 host.go:66] Checking if "addons-101000" exists ...
	W0719 15:52:30.103028    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103039    1544 addons.go:277] "addons-101000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0719 15:52:30.103042    1544 addons.go:467] Verifying addon ingress=true in "addons-101000"
	I0719 15:52:30.106383    1544 out.go:177] * Verifying ingress addon...
	W0719 15:52:30.103028    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103167    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103182    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	I0719 15:52:30.103217    1544 addons.go:69] Setting default-storageclass=true in profile "addons-101000"
	W0719 15:52:30.103227    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103229    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	I0719 15:52:30.103236    1544 addons.go:69] Setting gcp-auth=true in profile "addons-101000"
	W0719 15:52:30.103647    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.115464    1544 addons.go:277] "addons-101000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115470    1544 addons.go:277] "addons-101000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115473    1544 addons.go:277] "addons-101000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115476    1544 addons.go:277] "addons-101000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115487    1544 mustload.go:65] Loading cluster: addons-101000
	W0719 15:52:30.115490    1544 addons.go:277] "addons-101000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115530    1544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-101000"
	W0719 15:52:30.115554    1544 addons.go:277] "addons-101000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115896    1544 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 15:52:30.118371    1544 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 15:52:30.125455    1544 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 15:52:30.125463    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 15:52:30.125471    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.122419    1544 addons.go:467] Verifying addon registry=true in "addons-101000"
	I0719 15:52:30.122435    1544 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0719 15:52:30.133254    1544 out.go:177] * Verifying registry addon...
	I0719 15:52:30.122534    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:30.122472    1544 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-101000"
	I0719 15:52:30.126814    1544 addons.go:231] Setting addon default-storageclass=true in "addons-101000"
	I0719 15:52:30.127301    1544 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 15:52:30.129401    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:52:30.136476    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:52:30.136487    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.139357    1544 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 15:52:30.136602    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.137003    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 15:52:30.137526    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.146971    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 15:52:30.147364    1544 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:30.147370    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:52:30.147377    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.154043    1544 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 15:52:30.155110    1544 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 15:52:30.167928    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
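	The pipeline above edits the CoreDNS Corefile in flight: it fetches the ConfigMap, uses sed to insert a hosts block (mapping host.minikube.internal to the host gateway) just before the "forward . /etc/resolv.conf" directive plus a "log" line after "errors", and pipes the result into kubectl replace. The hosts-insertion string surgery alone, demonstrated on a sample Corefile (not the cluster's actual one):

```go
// Sketch: splice a hosts block into a Corefile before the forward
// directive, the same edit the sed expression above performs.
package main

import (
	"fmt"
	"strings"
)

const corefile = `.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
    cache 30
}`

func main() {
	hosts := "    hosts {\n       192.168.105.1 host.minikube.internal\n       fallthrough\n    }\n"
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hosts)
		}
		out.WriteString(line + "\n")
	}
	fmt.Print(out.String())
}
```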
	I0719 15:52:30.178534    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 15:52:30.178543    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 15:52:30.183588    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 15:52:30.183597    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 15:52:30.191691    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 15:52:30.191699    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 15:52:30.203614    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:52:30.203623    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 15:52:30.208695    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:30.219630    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:52:30.219641    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:52:30.228005    1544 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 15:52:30.228015    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 15:52:30.232889    1544 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:30.232896    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 15:52:30.237389    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:30.293905    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:30.293918    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:52:30.323074    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:30.620570    1544 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-101000" context rescaled to 1 replicas
	I0719 15:52:30.620588    1544 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 15:52:30.627928    1544 out.go:177] * Verifying Kubernetes components...
	I0719 15:52:30.631983    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:30.771514    1544 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0719 15:52:30.880793    1544 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 15:52:30.880817    1544 retry.go:31] will retry after 267.649411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
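	The failure above is an ordering race: the VolumeSnapshotClass object is applied in the same kubectl apply that creates its CRD, so the API server may not have registered the new kind yet ("ensure CRDs are installed first"). minikube copes by retrying with randomized backoff (retry.go:31), and eventually re-applies with --force, as seen below. A minimal sketch of that retry-with-jittered-backoff pattern (attempt counts and delays here are invented):

```go
// Sketch: retry a flaky operation with doubling, jittered delays.
package main

import (
	"errors"
	"log"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, f func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		// Jitter so concurrent retriers do not synchronize.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		log.Printf("will retry after %s: %v", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	i := 0
	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
		i++
		if i < 3 {
			return errors.New(`no matches for kind "VolumeSnapshotClass"`)
		}
		return nil
	})
	log.Println("done:", err)
}
```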
	I0719 15:52:30.944103    1544 addons.go:467] Verifying addon metrics-server=true in "addons-101000"
	I0719 15:52:30.944554    1544 node_ready.go:35] waiting up to 6m0s for node "addons-101000" to be "Ready" ...
	I0719 15:52:30.946971    1544 node_ready.go:49] node "addons-101000" has status "Ready":"True"
	I0719 15:52:30.946984    1544 node_ready.go:38] duration metric: took 2.412542ms waiting for node "addons-101000" to be "Ready" ...
	I0719 15:52:30.946988    1544 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:30.951440    1544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:31.148622    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:32.960900    1544 pod_ready.go:102] pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:33.460293    1544 pod_ready.go:92] pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.460303    1544 pod_ready.go:81] duration metric: took 2.50887925s waiting for pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.460308    1544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.463445    1544 pod_ready.go:92] pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.463453    1544 pod_ready.go:81] duration metric: took 3.140459ms waiting for pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.463458    1544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.466021    1544 pod_ready.go:92] pod "etcd-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.466025    1544 pod_ready.go:81] duration metric: took 2.564083ms waiting for pod "etcd-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.466029    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.468619    1544 pod_ready.go:92] pod "kube-apiserver-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.468625    1544 pod_ready.go:81] duration metric: took 2.592875ms waiting for pod "kube-apiserver-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.468629    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.471097    1544 pod_ready.go:92] pod "kube-controller-manager-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.471103    1544 pod_ready.go:81] duration metric: took 2.47075ms waiting for pod "kube-controller-manager-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.471106    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jpdlk" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.691576    1544 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542959542s)
	I0719 15:52:33.857829    1544 pod_ready.go:92] pod "kube-proxy-jpdlk" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.857838    1544 pod_ready.go:81] duration metric: took 386.731917ms waiting for pod "kube-proxy-jpdlk" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.857843    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:34.259787    1544 pod_ready.go:92] pod "kube-scheduler-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:34.259798    1544 pod_ready.go:81] duration metric: took 401.956208ms waiting for pod "kube-scheduler-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:34.259802    1544 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:36.666794    1544 pod_ready.go:102] pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:36.752105    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 15:52:36.752122    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:36.786675    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 15:52:36.794559    1544 addons.go:231] Setting addon gcp-auth=true in "addons-101000"
	I0719 15:52:36.794583    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:36.795348    1544 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 15:52:36.795361    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:36.829668    1544 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0719 15:52:36.833634    1544 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0719 15:52:36.837615    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 15:52:36.837621    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 15:52:36.843015    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 15:52:36.843020    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 15:52:36.847964    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 15:52:36.847971    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0719 15:52:36.853694    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 15:52:37.162175    1544 pod_ready.go:92] pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:37.162186    1544 pod_ready.go:81] duration metric: took 2.902411833s waiting for pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:37.162191    1544 pod_ready.go:38] duration metric: took 6.215265042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:37.162200    1544 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:37.162261    1544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:37.213333    1544 api_server.go:72] duration metric: took 6.592798333s to wait for apiserver process to appear ...
	I0719 15:52:37.213345    1544 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:37.213352    1544 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0719 15:52:37.213987    1544 addons.go:467] Verifying addon gcp-auth=true in "addons-101000"
	I0719 15:52:37.217265    1544 out.go:177] * Verifying gcp-auth addon...
	I0719 15:52:37.224564    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 15:52:37.226285    1544 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0719 15:52:37.227830    1544 api_server.go:141] control plane version: v1.27.3
	I0719 15:52:37.227837    1544 api_server.go:131] duration metric: took 14.489167ms to wait for apiserver health ...
	I0719 15:52:37.227841    1544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:37.231500    1544 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 15:52:37.231508    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:37.232481    1544 system_pods.go:59] 10 kube-system pods found
	I0719 15:52:37.232488    1544 system_pods.go:61] "coredns-5d78c9869d-4dqmf" [3335e5bb-cd1b-4f88-8662-5f52c470886e] Running
	I0719 15:52:37.232491    1544 system_pods.go:61] "coredns-5d78c9869d-knvd5" [8982b7a7-b4c0-4d8f-aaac-24b2aa68727d] Running
	I0719 15:52:37.232493    1544 system_pods.go:61] "etcd-addons-101000" [76e23cae-d673-4ce3-8c1a-28173c4ad7fb] Running
	I0719 15:52:37.232495    1544 system_pods.go:61] "kube-apiserver-addons-101000" [b5b978e3-f407-4753-974c-733ac90e64b3] Running
	I0719 15:52:37.232497    1544 system_pods.go:61] "kube-controller-manager-addons-101000" [d4160a3e-e6e8-450b-8484-7fe93b6a935c] Running
	I0719 15:52:37.232500    1544 system_pods.go:61] "kube-proxy-jpdlk" [e6fd914c-3322-4343-bfc3-e20cc61a9c1b] Running
	I0719 15:52:37.232503    1544 system_pods.go:61] "kube-scheduler-addons-101000" [22d2fcb7-43d8-4531-958e-506bd7b1d4b6] Running
	I0719 15:52:37.232506    1544 system_pods.go:61] "metrics-server-844d8db974-vt8ml" [a25120a0-a3f2-4a32-851b-21a7b451818f] Running
	I0719 15:52:37.232510    1544 system_pods.go:61] "snapshot-controller-75bbb956b9-9qbz2" [449931f4-f646-44cb-b8ae-3246b6e14db1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.232514    1544 system_pods.go:61] "snapshot-controller-75bbb956b9-gsppf" [358b5d0e-1f68-47f3-bc7d-ebbf387b2e40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.232518    1544 system_pods.go:74] duration metric: took 4.674833ms to wait for pod list to return data ...
	I0719 15:52:37.232523    1544 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:37.235194    1544 default_sa.go:45] found service account: "default"
	I0719 15:52:37.235203    1544 default_sa.go:55] duration metric: took 2.676875ms for default service account to be created ...
	I0719 15:52:37.235207    1544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:37.262055    1544 system_pods.go:86] 10 kube-system pods found
	I0719 15:52:37.262063    1544 system_pods.go:89] "coredns-5d78c9869d-4dqmf" [3335e5bb-cd1b-4f88-8662-5f52c470886e] Running
	I0719 15:52:37.262066    1544 system_pods.go:89] "coredns-5d78c9869d-knvd5" [8982b7a7-b4c0-4d8f-aaac-24b2aa68727d] Running
	I0719 15:52:37.262069    1544 system_pods.go:89] "etcd-addons-101000" [76e23cae-d673-4ce3-8c1a-28173c4ad7fb] Running
	I0719 15:52:37.262071    1544 system_pods.go:89] "kube-apiserver-addons-101000" [b5b978e3-f407-4753-974c-733ac90e64b3] Running
	I0719 15:52:37.262073    1544 system_pods.go:89] "kube-controller-manager-addons-101000" [d4160a3e-e6e8-450b-8484-7fe93b6a935c] Running
	I0719 15:52:37.262075    1544 system_pods.go:89] "kube-proxy-jpdlk" [e6fd914c-3322-4343-bfc3-e20cc61a9c1b] Running
	I0719 15:52:37.262078    1544 system_pods.go:89] "kube-scheduler-addons-101000" [22d2fcb7-43d8-4531-958e-506bd7b1d4b6] Running
	I0719 15:52:37.262080    1544 system_pods.go:89] "metrics-server-844d8db974-vt8ml" [a25120a0-a3f2-4a32-851b-21a7b451818f] Running
	I0719 15:52:37.262085    1544 system_pods.go:89] "snapshot-controller-75bbb956b9-9qbz2" [449931f4-f646-44cb-b8ae-3246b6e14db1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.262089    1544 system_pods.go:89] "snapshot-controller-75bbb956b9-gsppf" [358b5d0e-1f68-47f3-bc7d-ebbf387b2e40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.262093    1544 system_pods.go:126] duration metric: took 26.883625ms to wait for k8s-apps to be running ...
	I0719 15:52:37.262096    1544 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:37.262153    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:37.267299    1544 system_svc.go:56] duration metric: took 5.200291ms WaitForService to wait for kubelet.
	I0719 15:52:37.267305    1544 kubeadm.go:581] duration metric: took 6.646776125s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0719 15:52:37.267313    1544 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:37.460427    1544 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0719 15:52:37.460461    1544 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:37.460466    1544 node_conditions.go:105] duration metric: took 193.152542ms to run NodePressure ...
	I0719 15:52:37.460471    1544 start.go:228] waiting for startup goroutines ...
	I0719 15:52:37.735761    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:38.234684    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:38.735371    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:39.235765    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:39.735907    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:40.235373    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:40.734681    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:41.235287    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:41.735207    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:42.235545    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:42.734812    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:43.235286    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:43.736840    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:44.235205    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:44.738509    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:45.235248    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:45.735521    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:46.236102    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:46.735748    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:47.235790    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:47.736196    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:48.235182    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:48.735185    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:49.237494    1544 kapi.go:107] duration metric: took 12.013055167s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 15:52:49.242337    1544 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-101000 cluster.
	I0719 15:52:49.247318    1544 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 15:52:49.251640    1544 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
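	The two messages above are actionable. Per the last line, existing pods can be refreshed with `minikube addons enable gcp-auth --refresh`; per the line before it, a pod opts out of credential mounting via a `gcp-auth-skip-secret` label. A minimal sketch of such a pod follows, assuming the webhook honors the value "true" (the label key comes from the log above; the pod name, image, and value are illustrative):

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds               # illustrative name
	    labels:
	      gcp-auth-skip-secret: "true"   # key taken from the log above; value assumed
	  spec:
	    containers:
	    - name: app                      # illustrative container
	      image: busybox
	      command: ["sleep", "3600"]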
	I0719 15:58:30.120713    1544 kapi.go:107] duration metric: took 6m0.008647875s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0719 15:58:30.121029    1544 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0719 15:58:30.144635    1544 kapi.go:107] duration metric: took 6m0.001552791s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 15:58:30.144676    1544 kapi.go:107] duration metric: took 6m0.011563667s to wait for kubernetes.io/minikube-addons=registry ...
	W0719 15:58:30.144755    1544 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0719 15:58:30.144795    1544 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0719 15:58:30.152624    1544 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, inspektor-gadget, default-storageclass, metrics-server, volumesnapshots, gcp-auth
	I0719 15:58:30.164713    1544 addons.go:502] enable addons completed in 6m0.066202625s: enabled=[ingress-dns storage-provisioner cloud-spanner inspektor-gadget default-storageclass metrics-server volumesnapshots gcp-auth]
	I0719 15:58:30.164763    1544 start.go:233] waiting for cluster config update ...
	I0719 15:58:30.164788    1544 start.go:242] writing updated cluster config ...
	I0719 15:58:30.169625    1544 ssh_runner.go:195] Run: rm -f paused
	I0719 15:58:30.237703    1544 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0719 15:58:30.241711    1544 out.go:177] * Done! kubectl is now configured to use "addons-101000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-07-19 22:51:58 UTC, ends at Wed 2023-07-19 23:24:25 UTC. --
	Jul 19 22:52:45 addons-101000 dockerd[1149]: time="2023-07-19T22:52:45.710423798Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jul 19 22:52:48 addons-101000 cri-dockerd[1051]: time="2023-07-19T22:52:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502093407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502123765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502132147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502138277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526761438Z" level=info msg="shim disconnected" id=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526811146Z" level=warning msg="cleaning up after shim disconnected" id=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526818938Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1149]: time="2023-07-19T23:10:37.527016893Z" level=info msg="ignoring event" container=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596743616Z" level=info msg="shim disconnected" id=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596774491Z" level=warning msg="cleaning up after shim disconnected" id=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596779074Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1149]: time="2023-07-19T23:10:37.596907072Z" level=info msg="ignoring event" container=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:12:10 addons-101000 dockerd[1155]: time="2023-07-19T23:12:10.941118507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:12:10 addons-101000 dockerd[1155]: time="2023-07-19T23:12:10.941154798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:12:10 addons-101000 dockerd[1155]: time="2023-07-19T23:12:10.941371212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:12:10 addons-101000 dockerd[1155]: time="2023-07-19T23:12:10.941385587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:12:11 addons-101000 cri-dockerd[1051]: time="2023-07-19T23:12:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/97604dcd5e154149edefbf9219c003b4875f842d78dcc25aed66f8b4ef217365/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 23:12:11 addons-101000 dockerd[1149]: time="2023-07-19T23:12:11.306418944Z" level=warning msg="reference for unknown type: " digest="sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45" remote="ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45"
	Jul 19 23:12:16 addons-101000 cri-dockerd[1051]: time="2023-07-19T23:12:16Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.18.0@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45"
	Jul 19 23:12:16 addons-101000 dockerd[1155]: time="2023-07-19T23:12:16.589255166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:12:16 addons-101000 dockerd[1155]: time="2023-07-19T23:12:16.589620995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:12:16 addons-101000 dockerd[1155]: time="2023-07-19T23:12:16.589634120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:12:16 addons-101000 dockerd[1155]: time="2023-07-19T23:12:16.589638953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	6912ff4c65e89       ghcr.io/headlamp-k8s/headlamp@sha256:67ba87b88218563eec9684525904936609713b02dcbcf4390cd055766217ed45                     12 minutes ago      Running             headlamp                     0                   97604dcd5e154
	1cc3491bfa533       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              31 minutes ago      Running             gcp-auth                     0                   0ee1896a26320
	2b9a59faa57f0       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   31 minutes ago      Running             volume-snapshot-controller   0                   4d989e719f82a
	d47989aa362eb       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   31 minutes ago      Running             volume-snapshot-controller   0                   16bf9bed7271f
	0df05b2c74afc       97e04611ad434                                                                                                             31 minutes ago      Running             coredns                      0                   49ab25acb4281
	19588c52e552d       fb73e92641fd5                                                                                                             31 minutes ago      Running             kube-proxy                   0                   4d2bba9dbbd12
	f959c7f626d6e       24bc64e911039                                                                                                             32 minutes ago      Running             etcd                         0                   e9214702e68a5
	c6c632dd083f2       bcb9e554eaab6                                                                                                             32 minutes ago      Running             kube-scheduler               0                   956f93b928e2f
	862babcc9993e       ab3683b584ae5                                                                                                             32 minutes ago      Running             kube-controller-manager      0                   81af0dc9e0f17
	5984dda0d68af       39dfb036b0986                                                                                                             32 minutes ago      Running             kube-apiserver               0                   b8a23dc6dd212
	
	* 
	* ==> coredns [0df05b2c74af] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38035 - 43333 "HINFO IN 3178013197050500524.6871022848785512211. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004455439s
	[INFO] 10.244.0.9:44781 - 2820 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000096121s
	[INFO] 10.244.0.9:49458 - 31580 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000042327s
	[INFO] 10.244.0.9:47092 - 42379 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000038365s
	[INFO] 10.244.0.9:51547 - 25508 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000044495s
	[INFO] 10.244.0.9:33952 - 7053 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043995s
	[INFO] 10.244.0.9:46272 - 9283 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000029482s
	[INFO] 10.244.0.9:44297 - 58626 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001146116s
	[INFO] 10.244.0.9:36915 - 33133 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001094699s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-101000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-101000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4
	                    minikube.k8s.io/name=addons-101000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_19T15_52_16_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Jul 2023 22:52:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-101000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Jul 2023 23:24:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Jul 2023 23:22:33 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Jul 2023 23:22:33 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Jul 2023 23:22:33 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Jul 2023 23:22:33 +0000   Wed, 19 Jul 2023 22:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-101000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 06f7171a2e3b478b8a006d3ed11bcad4
	  System UUID:                06f7171a2e3b478b8a006d3ed11bcad4
	  Boot ID:                    388cb244-002c-43f0-bc4d-d5cefb6c596c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-hfg7x                0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
	  headlamp                    headlamp-66f6498c69-gdc9w                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5d78c9869d-knvd5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     31m
	  kube-system                 etcd-addons-101000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         32m
	  kube-system                 kube-apiserver-addons-101000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32m
	  kube-system                 kube-controller-manager-addons-101000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32m
	  kube-system                 kube-proxy-jpdlk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
	  kube-system                 kube-scheduler-addons-101000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32m
	  kube-system                 snapshot-controller-75bbb956b9-9qbz2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
	  kube-system                 snapshot-controller-75bbb956b9-gsppf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         31m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31m                kube-proxy       
	  Normal  Starting                 32m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32m (x8 over 32m)  kubelet          Node addons-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32m (x8 over 32m)  kubelet          Node addons-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32m (x7 over 32m)  kubelet          Node addons-101000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  32m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 32m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  32m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  32m                kubelet          Node addons-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32m                kubelet          Node addons-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32m                kubelet          Node addons-101000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                32m                kubelet          Node addons-101000 status is now: NodeReady
	  Normal  RegisteredNode           31m                node-controller  Node addons-101000 event: Registered Node addons-101000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.640919] EINJ: EINJ table not found.
	[  +0.493985] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044020] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000805] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jul19 22:52] systemd-fstab-generator[483]: Ignoring "noauto" for root device
	[  +0.066120] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.415432] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.174029] systemd-fstab-generator[789]: Ignoring "noauto" for root device
	[  +0.074321] systemd-fstab-generator[800]: Ignoring "noauto" for root device
	[  +0.082963] systemd-fstab-generator[813]: Ignoring "noauto" for root device
	[  +1.234248] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +0.083333] systemd-fstab-generator[982]: Ignoring "noauto" for root device
	[  +0.083834] systemd-fstab-generator[993]: Ignoring "noauto" for root device
	[  +0.077345] systemd-fstab-generator[1004]: Ignoring "noauto" for root device
	[  +0.087881] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +2.511224] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
	[  +2.197027] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.212424] systemd-fstab-generator[1454]: Ignoring "noauto" for root device
	[  +5.140733] systemd-fstab-generator[2321]: Ignoring "noauto" for root device
	[ +14.969880] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.331585] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.144611] kauditd_printk_skb: 47 callbacks suppressed
	[  +7.075011] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.120561] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [f959c7f626d6] <==
	* {"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-19T23:02:13.581Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":796}
	{"level":"info","ts":"2023-07-19T23:02:13.583Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":796,"took":"1.901103ms","hash":1438469743}
	{"level":"info","ts":"2023-07-19T23:02:13.583Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1438469743,"revision":796,"compact-revision":-1}
	{"level":"info","ts":"2023-07-19T23:07:13.591Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":947}
	{"level":"info","ts":"2023-07-19T23:07:13.593Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":947,"took":"1.234408ms","hash":3746673681}
	{"level":"info","ts":"2023-07-19T23:07:13.593Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3746673681,"revision":947,"compact-revision":796}
	{"level":"info","ts":"2023-07-19T23:12:13.596Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1098}
	{"level":"info","ts":"2023-07-19T23:12:13.597Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1098,"took":"538.451µs","hash":2203992484}
	{"level":"info","ts":"2023-07-19T23:12:13.597Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2203992484,"revision":1098,"compact-revision":947}
	{"level":"info","ts":"2023-07-19T23:17:13.605Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1304}
	{"level":"info","ts":"2023-07-19T23:17:13.608Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1304,"took":"1.459903ms","hash":3587792068}
	{"level":"info","ts":"2023-07-19T23:17:13.608Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3587792068,"revision":1304,"compact-revision":1098}
	{"level":"info","ts":"2023-07-19T23:22:13.612Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1466}
	{"level":"info","ts":"2023-07-19T23:22:13.615Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1466,"took":"1.386447ms","hash":1073438637}
	{"level":"info","ts":"2023-07-19T23:22:13.615Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1073438637,"revision":1466,"compact-revision":1304}
	
	* 
	* ==> gcp-auth [1cc3491bfa53] <==
	* 2023/07/19 22:52:48 GCP Auth Webhook started!
	2023/07/19 23:12:10 Ready to marshal response ...
	2023/07/19 23:12:10 Ready to write response ...
	2023/07/19 23:12:10 Ready to marshal response ...
	2023/07/19 23:12:10 Ready to write response ...
	2023/07/19 23:12:10 Ready to marshal response ...
	2023/07/19 23:12:10 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  23:24:25 up 32 min,  0 users,  load average: 0.37, 0.35, 0.32
	Linux addons-101000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5984dda0d68a] <==
	* I0719 23:12:14.240585       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:12:14.243887       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:12:14.243904       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0719 23:13:38.208145       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0719 23:13:38.208225       1 handler_proxy.go:100] no RequestInfo found in the context
	E0719 23:13:38.208305       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 23:13:38.208326       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 23:17:14.245472       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:17:14.245567       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:17:14.248818       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:17:14.248863       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:17:14.250212       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:17:14.250271       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0719 23:17:38.208543       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0719 23:17:38.208619       1 handler_proxy.go:100] no RequestInfo found in the context
	E0719 23:17:38.208697       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 23:17:38.209061       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 23:22:14.243466       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:22:14.243506       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:22:14.246827       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:22:14.246860       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:22:14.247226       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:22:14.247294       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [862babcc9993] <==
	* I0719 23:21:14.122867       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:21:29.123760       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:21:29.123788       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:21:44.126149       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:21:44.126283       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:21:59.127033       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:21:59.127247       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:22:14.128083       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:22:14.128262       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:22:29.128662       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:22:29.128716       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:22:44.129293       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:22:44.129441       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:22:59.129774       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:22:59.130262       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:23:14.129868       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:23:14.130266       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:23:29.130018       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:23:29.130071       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:23:44.131183       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:23:44.131598       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:23:59.132504       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:23:59.133095       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0719 23:24:14.132809       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0719 23:24:14.133127       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	
	* 
	* ==> kube-proxy [19588c52e552] <==
	* I0719 22:52:31.940544       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0719 22:52:31.940603       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0719 22:52:31.940629       1 server_others.go:554] "Using iptables proxy"
	I0719 22:52:31.978237       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0719 22:52:31.978247       1 server_others.go:192] "Using iptables Proxier"
	I0719 22:52:31.978272       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 22:52:31.978547       1 server.go:658] "Version info" version="v1.27.3"
	I0719 22:52:31.978554       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 22:52:31.980358       1 config.go:188] "Starting service config controller"
	I0719 22:52:31.980401       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0719 22:52:31.980465       1 config.go:97] "Starting endpoint slice config controller"
	I0719 22:52:31.980482       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0719 22:52:31.980953       1 config.go:315] "Starting node config controller"
	I0719 22:52:31.980984       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0719 22:52:32.081146       1 shared_informer.go:318] Caches are synced for node config
	I0719 22:52:32.081155       1 shared_informer.go:318] Caches are synced for service config
	I0719 22:52:32.081163       1 shared_informer.go:318] Caches are synced for endpoint slice config
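	The route_localnet message above names two alternatives to allowing node-ports on localhost. As a hedged sketch, both map onto KubeProxyConfiguration fields documented for v1.27; the CIDR value below is illustrative, chosen to cover the node IP reported earlier:

	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  iptables:
	    localhostNodePorts: false   # equivalent of --iptables-localhost-nodeports=false
	  nodePortAddresses:            # alternatively, restrict NodePorts to non-loopback ranges
	  - 192.168.105.0/24            # illustrative CIDR covering the node IP 192.168.105.2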
	
	* 
	* ==> kube-scheduler [c6c632dd083f] <==
	* W0719 22:52:14.250631       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 22:52:14.250639       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 22:52:14.250680       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 22:52:14.250689       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 22:52:14.250732       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 22:52:14.250739       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 22:52:14.250771       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 22:52:14.250778       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 22:52:14.250830       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 22:52:14.250837       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 22:52:14.250850       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 22:52:14.250875       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 22:52:14.250933       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:14.250977       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:14.251012       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 22:52:14.251046       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 22:52:14.251076       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:14.251088       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:15.080697       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 22:52:15.080736       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 22:52:15.086216       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:15.086240       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:15.262476       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 22:52:15.262526       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0719 22:52:15.546346       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-07-19 22:51:58 UTC, ends at Wed 2023-07-19 23:24:25 UTC. --
	Jul 19 23:19:16 addons-101000 kubelet[2341]: E0719 23:19:16.803309    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:19:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:19:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:19:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:20:16 addons-101000 kubelet[2341]: E0719 23:20:16.802276    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:20:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:20:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:20:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:21:16 addons-101000 kubelet[2341]: E0719 23:21:16.804232    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:21:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:21:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:21:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:22:16 addons-101000 kubelet[2341]: W0719 23:22:16.781067    2341 machine.go:65] Cannot read vendor id correctly, set empty.
	Jul 19 23:22:16 addons-101000 kubelet[2341]: E0719 23:22:16.802900    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:22:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:22:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:22:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:23:16 addons-101000 kubelet[2341]: E0719 23:23:16.802692    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:23:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:23:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:23:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:24:16 addons-101000 kubelet[2341]: E0719 23:24:16.800145    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:24:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:24:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:24:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	

-- /stdout --
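Note on the excerpt above: the kube-scheduler "forbidden" reflector errors are a startup race (the scheduler begins listing resources before its RBAC bindings are published) and no further occurrences appear after the caches-synced line. The recurring kubelet "Could not set up iptables canary" error means the guest kernel exposes no ip6tables nat table; for an IPv4-only test cluster this is generally harmless noise. A minimal spot-check, reusing the profile name from this run (these commands are illustrative, not part of the harness):

    # Is the IPv6 NAT module present/loadable in the guest kernel?
    out/minikube-darwin-arm64 -p addons-101000 ssh -- lsmod | grep ip6table_nat
    out/minikube-darwin-arm64 -p addons-101000 ssh -- sudo modprobe ip6table_nat
    out/minikube-darwin-arm64 -p addons-101000 ssh -- sudo ip6tables -t nat -L
    # Confirm the scheduler's RBAC eventually converged:
    kubectl --context addons-101000 auth can-i list replicasets.apps --as=system:kube-scheduler --all-namespaces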
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-101000 -n addons-101000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-101000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (720.91s)
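The ~720s runtime reads as an exhausted wait budget rather than an assertion failure: the harness polled for the whole window without the CSI objects becoming healthy. A hedged first-pass triage against this profile (generic resource kinds only; none of the object names below are taken from the failing run):

    out/minikube-darwin-arm64 status -p addons-101000
    kubectl --context addons-101000 -n kube-system get pods -o wide | grep -i csi
    kubectl --context addons-101000 get csidrivers,storageclasses
    kubectl --context addons-101000 get pvc -A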

TestAddons/parallel/CloudSpanner (819.15s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:831: failed waiting for cloud-spanner-emulator deployment to stabilize: timed out waiting for the condition
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
addons_test.go:833: ***** TestAddons/parallel/CloudSpanner: pod "app=cloud-spanner-emulator" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:833: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-101000 -n addons-101000
addons_test.go:833: TestAddons/parallel/CloudSpanner: showing logs for failed pods as of 2023-07-19 16:10:30.385515 -0700 PDT m=+1158.379205751
addons_test.go:834: failed waiting for app=cloud-spanner-emulator pod: app=cloud-spanner-emulator within 6m0s: context deadline exceeded
addons_test.go:836: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-101000
addons_test.go:836: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-101000: exit status 10 (1m38.305214625s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE: disable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/deployment.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/deployment.yaml" does not exist
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:837: failed to disable cloud-spanner addon: args "out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-101000" : exit status 10
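Two distinct failures are recorded for this test: the cloud-spanner-emulator deployment never produced a Running pod within 6m0s, and the follow-up "addons disable" then failed because /etc/kubernetes/addons/deployment.yaml was never written into (or had already been removed from) the guest. A hedged triage sketch, reusing the profile, deployment name, and pod label from the log above:

    kubectl --context addons-101000 get deploy,pods -l app=cloud-spanner-emulator -o wide
    kubectl --context addons-101000 describe deploy cloud-spanner-emulator
    kubectl --context addons-101000 get events --sort-by=.lastTimestamp | tail -n 20
    # Was the addon manifest ever materialized in the guest?
    out/minikube-darwin-arm64 -p addons-101000 ssh -- ls -l /etc/kubernetes/addons/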
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-101000 -n addons-101000
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-101000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | -p download-only-744000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | -p download-only-744000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| delete  | -p download-only-744000        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| delete  | -p download-only-744000        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| start   | --download-only -p             | binary-mirror-101000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |                     |
	|         | binary-mirror-101000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-101000        | binary-mirror-101000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:51 PDT |
	| start   | -p addons-101000               | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT | 19 Jul 23 15:58 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 16:10 PDT |                     |
	|         | addons-101000                  |                      |         |         |                     |                     |
	| addons  | addons-101000 addons           | addons-101000        | jenkins | v1.31.0 | 19 Jul 23 16:10 PDT | 19 Jul 23 16:10 PDT |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/19 15:51:46
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:51:46.194297    1544 out.go:296] Setting OutFile to fd 1 ...
	I0719 15:51:46.194422    1544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:46.194425    1544 out.go:309] Setting ErrFile to fd 2...
	I0719 15:51:46.194428    1544 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:46.194533    1544 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 15:51:46.195601    1544 out.go:303] Setting JSON to false
	I0719 15:51:46.210650    1544 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1277,"bootTime":1689805829,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 15:51:46.210718    1544 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 15:51:46.215551    1544 out.go:177] * [addons-101000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 15:51:46.222492    1544 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 15:51:46.226347    1544 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:51:46.222559    1544 notify.go:220] Checking for updates...
	I0719 15:51:46.229495    1544 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 15:51:46.232495    1544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:51:46.235514    1544 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 15:51:46.238453    1544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 15:51:46.241624    1544 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 15:51:46.245483    1544 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 15:51:46.252546    1544 start.go:298] selected driver: qemu2
	I0719 15:51:46.252550    1544 start.go:880] validating driver "qemu2" against <nil>
	I0719 15:51:46.252555    1544 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 15:51:46.254378    1544 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 15:51:46.257444    1544 out.go:177] * Automatically selected the socket_vmnet network
	I0719 15:51:46.260545    1544 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 15:51:46.260571    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:51:46.260577    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:51:46.260582    1544 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 15:51:46.260588    1544 start_flags.go:319] config:
	{Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:51:46.264618    1544 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:51:46.272375    1544 out.go:177] * Starting control plane node addons-101000 in cluster addons-101000
	I0719 15:51:46.276444    1544 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:51:46.276475    1544 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 15:51:46.276490    1544 cache.go:57] Caching tarball of preloaded images
	I0719 15:51:46.276551    1544 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 15:51:46.276557    1544 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 15:51:46.276783    1544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json ...
	I0719 15:51:46.276796    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json: {Name:mk5e2042adc5d3df20329816c5917e6964724b56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:51:46.277018    1544 start.go:365] acquiring machines lock for addons-101000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 15:51:46.277112    1544 start.go:369] acquired machines lock for "addons-101000" in 87.917µs
	I0719 15:51:46.277123    1544 start.go:93] Provisioning new machine with config: &{Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 15:51:46.277150    1544 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 15:51:46.284454    1544 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 15:51:46.605350    1544 start.go:159] libmachine.API.Create for "addons-101000" (driver="qemu2")
	I0719 15:51:46.605388    1544 client.go:168] LocalClient.Create starting
	I0719 15:51:46.605532    1544 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 15:51:46.811960    1544 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 15:51:46.895754    1544 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 15:51:47.292066    1544 main.go:141] libmachine: Creating SSH key...
	I0719 15:51:47.506919    1544 main.go:141] libmachine: Creating Disk image...
	I0719 15:51:47.506931    1544 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 15:51:47.507226    1544 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.541355    1544 main.go:141] libmachine: STDOUT: 
	I0719 15:51:47.541387    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.541456    1544 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2 +20000M
	I0719 15:51:47.548773    1544 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 15:51:47.548786    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.548801    1544 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.548806    1544 main.go:141] libmachine: Starting QEMU VM...
	I0719 15:51:47.548843    1544 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:3a:02:96:05:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/disk.qcow2
	I0719 15:51:47.614276    1544 main.go:141] libmachine: STDOUT: 
	I0719 15:51:47.614314    1544 main.go:141] libmachine: STDERR: 
	I0719 15:51:47.614319    1544 main.go:141] libmachine: Attempt 0
	I0719 15:51:47.614337    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:49.616493    1544 main.go:141] libmachine: Attempt 1
	I0719 15:51:49.616586    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:51.618782    1544 main.go:141] libmachine: Attempt 2
	I0719 15:51:51.618840    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:53.620905    1544 main.go:141] libmachine: Attempt 3
	I0719 15:51:53.620918    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:55.622923    1544 main.go:141] libmachine: Attempt 4
	I0719 15:51:55.622935    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:57.624701    1544 main.go:141] libmachine: Attempt 5
	I0719 15:51:57.624722    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:59.626797    1544 main.go:141] libmachine: Attempt 6
	I0719 15:51:59.626823    1544 main.go:141] libmachine: Searching for 36:3a:2:96:5:da in /var/db/dhcpd_leases ...
	I0719 15:51:59.626967    1544 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0719 15:51:59.626997    1544 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b9ba8e}
	I0719 15:51:59.627004    1544 main.go:141] libmachine: Found match: 36:3a:2:96:5:da
	I0719 15:51:59.627013    1544 main.go:141] libmachine: IP: 192.168.105.2
	I0719 15:51:59.627019    1544 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0719 15:52:01.648988    1544 machine.go:88] provisioning docker machine ...
	I0719 15:52:01.649049    1544 buildroot.go:166] provisioning hostname "addons-101000"
	I0719 15:52:01.650570    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.651376    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.651395    1544 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-101000 && echo "addons-101000" | sudo tee /etc/hostname
	I0719 15:52:01.738896    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-101000
	
	I0719 15:52:01.739019    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.739522    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.739541    1544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-101000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-101000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-101000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 15:52:01.809650    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 15:52:01.809665    1544 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15585-1056/.minikube CaCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15585-1056/.minikube}
	I0719 15:52:01.809675    1544 buildroot.go:174] setting up certificates
	I0719 15:52:01.809702    1544 provision.go:83] configureAuth start
	I0719 15:52:01.809710    1544 provision.go:138] copyHostCerts
	I0719 15:52:01.809873    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem (1675 bytes)
	I0719 15:52:01.810191    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem (1082 bytes)
	I0719 15:52:01.810327    1544 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem (1123 bytes)
	I0719 15:52:01.810465    1544 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem org=jenkins.addons-101000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-101000]
	I0719 15:52:01.879682    1544 provision.go:172] copyRemoteCerts
	I0719 15:52:01.879750    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 15:52:01.879766    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:01.912803    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 15:52:01.919846    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0719 15:52:01.926696    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 15:52:01.934065    1544 provision.go:86] duration metric: configureAuth took 124.357417ms
	I0719 15:52:01.934074    1544 buildroot.go:189] setting minikube options for container-runtime
	I0719 15:52:01.934167    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:01.934205    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.934418    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.934423    1544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 15:52:01.991251    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 15:52:01.991259    1544 buildroot.go:70] root file system type: tmpfs
	I0719 15:52:01.991320    1544 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 15:52:01.991364    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:01.991596    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:01.991636    1544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 15:52:02.049859    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 15:52:02.049895    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:02.050139    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:02.050148    1544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 15:52:02.386931    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 15:52:02.386943    1544 machine.go:91] provisioned docker machine in 737.934792ms
	I0719 15:52:02.386948    1544 client.go:171] LocalClient.Create took 15.78172825s
	I0719 15:52:02.386964    1544 start.go:167] duration metric: libmachine.API.Create for "addons-101000" took 15.781794083s
	I0719 15:52:02.386972    1544 start.go:300] post-start starting for "addons-101000" (driver="qemu2")
	I0719 15:52:02.386977    1544 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 15:52:02.387049    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 15:52:02.387060    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.417331    1544 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 15:52:02.418797    1544 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 15:52:02.418804    1544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/addons for local assets ...
	I0719 15:52:02.418866    1544 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/files for local assets ...
	I0719 15:52:02.418892    1544 start.go:303] post-start completed in 31.917459ms
	I0719 15:52:02.419241    1544 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/config.json ...
	I0719 15:52:02.419386    1544 start.go:128] duration metric: createHost completed in 16.142407666s
	I0719 15:52:02.419424    1544 main.go:141] libmachine: Using SSH client type: native
	I0719 15:52:02.419636    1544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10258d170] 0x10258fbd0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0719 15:52:02.419640    1544 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 15:52:02.473900    1544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689807122.468027460
	
	I0719 15:52:02.473908    1544 fix.go:206] guest clock: 1689807122.468027460
	I0719 15:52:02.473913    1544 fix.go:219] Guest: 2023-07-19 15:52:02.46802746 -0700 PDT Remote: 2023-07-19 15:52:02.419389 -0700 PDT m=+16.243794293 (delta=48.63846ms)
	I0719 15:52:02.473924    1544 fix.go:190] guest clock delta is within tolerance: 48.63846ms
	I0719 15:52:02.473927    1544 start.go:83] releasing machines lock for "addons-101000", held for 16.196985625s
	I0719 15:52:02.474254    1544 ssh_runner.go:195] Run: cat /version.json
	I0719 15:52:02.474266    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.474283    1544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 15:52:02.474307    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:02.504138    1544 ssh_runner.go:195] Run: systemctl --version
	I0719 15:52:02.506807    1544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 15:52:02.547286    1544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 15:52:02.547337    1544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 15:52:02.552537    1544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 15:52:02.552544    1544 start.go:466] detecting cgroup driver to use...
	I0719 15:52:02.552636    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:52:02.558292    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 15:52:02.561659    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 15:52:02.565184    1544 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 15:52:02.565214    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 15:52:02.568252    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 15:52:02.571117    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 15:52:02.574394    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 15:52:02.578183    1544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 15:52:02.581670    1544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 15:52:02.585324    1544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 15:52:02.588146    1544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 15:52:02.590822    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:02.668293    1544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 15:52:02.674070    1544 start.go:466] detecting cgroup driver to use...
	I0719 15:52:02.674127    1544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 15:52:02.680608    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:52:02.685038    1544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 15:52:02.692942    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 15:52:02.697559    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 15:52:02.702616    1544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 15:52:02.743950    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 15:52:02.749347    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 15:52:02.754962    1544 ssh_runner.go:195] Run: which cri-dockerd
	I0719 15:52:02.756311    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 15:52:02.758805    1544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 15:52:02.763719    1544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 15:52:02.840623    1544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 15:52:02.914223    1544 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 15:52:02.914238    1544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0719 15:52:02.919565    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:02.997357    1544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 15:52:04.154714    1544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157354834s)
	I0719 15:52:04.154783    1544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 15:52:04.233603    1544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 15:52:04.316105    1544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 15:52:04.398081    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:04.476954    1544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 15:52:04.483791    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:04.563625    1544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0719 15:52:04.586684    1544 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 15:52:04.586782    1544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 15:52:04.589057    1544 start.go:534] Will wait 60s for crictl version
	I0719 15:52:04.589109    1544 ssh_runner.go:195] Run: which crictl
	I0719 15:52:04.590411    1544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 15:52:04.605474    1544 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0719 15:52:04.605558    1544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 15:52:04.615366    1544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 15:52:04.635830    1544 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0719 15:52:04.635983    1544 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0719 15:52:04.637460    1544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:52:04.641595    1544 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:52:04.641637    1544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 15:52:04.650900    1544 docker.go:636] Got preloaded images: 
	I0719 15:52:04.650907    1544 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0719 15:52:04.650940    1544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 15:52:04.654168    1544 ssh_runner.go:195] Run: which lz4
	I0719 15:52:04.655512    1544 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 15:52:04.656801    1544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 15:52:04.656815    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0719 15:52:05.915036    1544 docker.go:600] Took 1.259592 seconds to copy over tarball
	I0719 15:52:05.915105    1544 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 15:52:06.974144    1544 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.059035792s)
	I0719 15:52:06.974158    1544 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0719 15:52:06.989746    1544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 15:52:06.993185    1544 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0719 15:52:06.998174    1544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 15:52:07.075272    1544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 15:52:09.295448    1544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.220180667s)
	I0719 15:52:09.295552    1544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 15:52:09.301832    1544 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 15:52:09.301841    1544 cache_images.go:84] Images are preloaded, skipping loading
	I0719 15:52:09.301925    1544 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 15:52:09.309268    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:52:09.309280    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:52:09.309309    1544 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0719 15:52:09.309319    1544 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-101000 NodeName:addons-101000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 15:52:09.309384    1544 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-101000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 15:52:09.309419    1544 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-101000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0719 15:52:09.309476    1544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0719 15:52:09.312676    1544 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 15:52:09.312711    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 15:52:09.315418    1544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0719 15:52:09.320330    1544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 15:52:09.325480    1544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
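
The three `scp memory -->` lines mean the systemd drop-in, the kubelet unit, and the kubeadm config exist only as byte slices in the minikube process and are streamed straight into the guest over SSH, with no temp file on the macOS side. A rough sketch of the same idea with golang.org/x/crypto/ssh, assuming an already-dialed client (the helper name and the sudo-tee approach are mine; ssh_runner.go implements this differently):

package sketch

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// pushBytes streams data into remotePath on the guest by piping it
// through `sudo tee` — effectively an scp-from-memory.
func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", remotePath))
}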
	I0719 15:52:09.330778    1544 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0719 15:52:09.332164    1544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 15:52:09.335590    1544 certs.go:56] Setting up /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000 for IP: 192.168.105.2
	I0719 15:52:09.335612    1544 certs.go:190] acquiring lock for shared ca certs: {Name:mk57268b94adc82cb06ba056d8f0acecf538b87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.335779    1544 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key
	I0719 15:52:09.375531    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt ...
	I0719 15:52:09.375537    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt: {Name:mk18dc73651ebb7586f5cc870528fe59bb3eaca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.375716    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key ...
	I0719 15:52:09.375718    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key: {Name:mkf4847c0170d0ed2e02012567d5849b7cdc3e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.375829    1544 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key
	I0719 15:52:09.479964    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt ...
	I0719 15:52:09.479968    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt: {Name:mk931f43b9aeac1a637bc02f03d26df5c2c21559 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.480104    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key ...
	I0719 15:52:09.480107    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key: {Name:mkbbced5a2200a63ea6918cadfce8d25c9e09696 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.480228    1544 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key
	I0719 15:52:09.480236    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt with IP's: []
	I0719 15:52:09.550153    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt ...
	I0719 15:52:09.550157    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: {Name:mkfbd0ec0d392f0ad08f01bd61787ea0a90ba52c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.550273    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key ...
	I0719 15:52:09.550276    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.key: {Name:mk8c74eed8437e78eaa33e4b6b240669ae86a824 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.550378    1544 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969
	I0719 15:52:09.550392    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0719 15:52:09.700054    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 ...
	I0719 15:52:09.700063    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969: {Name:mkf03671886dbbbb632ec2e172f912e064d8e1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.700299    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969 ...
	I0719 15:52:09.700303    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969: {Name:mk303c08d6b543a4cd38e9de14800a408d1d2869 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.700416    1544 certs.go:337] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt
	I0719 15:52:09.700598    1544 certs.go:341] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key
	I0719 15:52:09.700687    1544 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key
	I0719 15:52:09.700696    1544 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt with IP's: []
	I0719 15:52:09.741490    1544 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt ...
	I0719 15:52:09.741493    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt: {Name:mk78bf5cf7588d5f6faf8ac273455bded2325b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.741610    1544 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key ...
	I0719 15:52:09.741617    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key: {Name:mkd247533695e3682be8e4d6fb67fe0e52efd3f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:09.741840    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 15:52:09.741864    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem (1082 bytes)
	I0719 15:52:09.741886    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem (1123 bytes)
	I0719 15:52:09.741911    1544 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem (1675 bytes)
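
certs.go generates the minikubeCA and proxyClientCA pairs on the host before anything is copied into the VM. The core of such self-signed CA generation fits in Go's standard library; a minimal sketch, with placeholder serial number, subject, and lifetime (minikube's crypto.go chooses these differently):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	// Self-signed: the template acts as both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}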
	I0719 15:52:09.742194    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0719 15:52:09.749753    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0719 15:52:09.756925    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 15:52:09.764081    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 15:52:09.770653    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 15:52:09.777938    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 15:52:09.785429    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 15:52:09.792823    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 15:52:09.799580    1544 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 15:52:09.806238    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 15:52:09.812452    1544 ssh_runner.go:195] Run: openssl version
	I0719 15:52:09.814304    1544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 15:52:09.817773    1544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.819362    1544 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 19 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.819381    1544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 15:52:09.821228    1544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0719 15:52:09.824165    1544 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0719 15:52:09.825567    1544 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0719 15:52:09.825608    1544 kubeadm.go:404] StartCluster: {Name:addons-101000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-101000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:52:09.825669    1544 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 15:52:09.831218    1544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 15:52:09.834469    1544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 15:52:09.837516    1544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 15:52:09.840473    1544 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 15:52:09.840487    1544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 15:52:09.861549    1544 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0719 15:52:09.861581    1544 kubeadm.go:322] [preflight] Running pre-flight checks
	I0719 15:52:09.914251    1544 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 15:52:09.914308    1544 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 15:52:09.914371    1544 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 15:52:09.979760    1544 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 15:52:09.988950    1544 out.go:204]   - Generating certificates and keys ...
	I0719 15:52:09.988981    1544 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0719 15:52:09.989014    1544 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0719 15:52:10.135496    1544 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 15:52:10.224881    1544 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0719 15:52:10.328051    1544 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0719 15:52:10.529629    1544 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0719 15:52:10.721019    1544 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0719 15:52:10.721090    1544 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-101000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0719 15:52:10.787563    1544 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0719 15:52:10.787619    1544 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-101000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0719 15:52:10.835004    1544 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 15:52:10.905361    1544 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 15:52:10.998864    1544 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0719 15:52:10.998890    1544 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 15:52:11.030652    1544 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 15:52:11.128642    1544 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 15:52:11.289310    1544 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 15:52:11.400745    1544 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 15:52:11.407437    1544 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 15:52:11.407496    1544 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 15:52:11.407517    1544 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0719 15:52:11.489012    1544 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 15:52:11.495210    1544 out.go:204]   - Booting up control plane ...
	I0719 15:52:11.495279    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 15:52:11.495324    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 15:52:11.495356    1544 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 15:52:11.495394    1544 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 15:52:11.496331    1544 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 15:52:15.497794    1544 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001180 seconds
	I0719 15:52:15.497922    1544 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 15:52:15.503484    1544 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 15:52:16.021124    1544 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 15:52:16.021263    1544 kubeadm.go:322] [mark-control-plane] Marking the node addons-101000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 15:52:16.527825    1544 kubeadm.go:322] [bootstrap-token] Using token: za2ad5.mzmzgft4t0cdmv0r
	I0719 15:52:16.534544    1544 out.go:204]   - Configuring RBAC rules ...
	I0719 15:52:16.534604    1544 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 15:52:16.535903    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 15:52:16.539838    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 15:52:16.541602    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 15:52:16.542999    1544 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 15:52:16.544402    1544 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 15:52:16.549043    1544 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 15:52:16.720317    1544 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0719 15:52:16.937953    1544 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0719 15:52:16.938484    1544 kubeadm.go:322] 
	I0719 15:52:16.938519    1544 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0719 15:52:16.938527    1544 kubeadm.go:322] 
	I0719 15:52:16.938563    1544 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0719 15:52:16.938588    1544 kubeadm.go:322] 
	I0719 15:52:16.938601    1544 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0719 15:52:16.938634    1544 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 15:52:16.938667    1544 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 15:52:16.938672    1544 kubeadm.go:322] 
	I0719 15:52:16.938695    1544 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0719 15:52:16.938698    1544 kubeadm.go:322] 
	I0719 15:52:16.938723    1544 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 15:52:16.938730    1544 kubeadm.go:322] 
	I0719 15:52:16.938754    1544 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0719 15:52:16.938790    1544 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 15:52:16.938841    1544 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 15:52:16.938844    1544 kubeadm.go:322] 
	I0719 15:52:16.938885    1544 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 15:52:16.938940    1544 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0719 15:52:16.938947    1544 kubeadm.go:322] 
	I0719 15:52:16.938990    1544 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token za2ad5.mzmzgft4t0cdmv0r \
	I0719 15:52:16.939068    1544 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 \
	I0719 15:52:16.939079    1544 kubeadm.go:322] 	--control-plane 
	I0719 15:52:16.939082    1544 kubeadm.go:322] 
	I0719 15:52:16.939124    1544 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0719 15:52:16.939129    1544 kubeadm.go:322] 
	I0719 15:52:16.939171    1544 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token za2ad5.mzmzgft4t0cdmv0r \
	I0719 15:52:16.939230    1544 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 
	I0719 15:52:16.939287    1544 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 15:52:16.939293    1544 cni.go:84] Creating CNI manager for ""
	I0719 15:52:16.939300    1544 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:52:16.943663    1544 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 15:52:16.946263    1544 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 15:52:16.949272    1544 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
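
The 457-byte 1-k8s.conflist written here is the bridge CNI config that cni.go:158 recommended; its exact payload is not echoed in the log. A typical bridge conflist for the 10.244.0.0/16 pod subnet looks roughly like this (every field value below is an assumption, not a quote from the log):

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}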
	I0719 15:52:16.953970    1544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 15:52:16.954045    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:16.954043    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4 minikube.k8s.io/name=addons-101000 minikube.k8s.io/updated_at=2023_07_19T15_52_16_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:17.018155    1544 ops.go:34] apiserver oom_adj: -16
	I0719 15:52:17.018193    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:17.554061    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:18.053987    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:18.552765    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:19.054043    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:19.552096    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:20.054238    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:20.554269    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:21.054238    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:21.554235    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:22.054268    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:22.554252    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:23.054226    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:23.554175    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:24.054171    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:24.553977    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:25.054181    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:25.553942    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:26.053975    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:26.553905    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:27.053866    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:27.553341    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.052551    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:28.553332    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.053858    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:29.553880    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.052491    1544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 15:52:30.101801    1544 kubeadm.go:1081] duration metric: took 13.147921333s to wait for elevateKubeSystemPrivileges.
	I0719 15:52:30.101816    1544 kubeadm.go:406] StartCluster complete in 20.276428833s
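
The burst of `kubectl get sa default` runs above is a plain poll: roughly every 500ms minikube re-executes the command until the default service account exists, and the 13.147921333s elevateKubeSystemPrivileges metric is the total time that loop spent. A generic version of the loop, with illustrative names:

package sketch

import (
	"errors"
	"time"
)

// pollUntil runs check every interval until it succeeds or timeout
// elapses — the same shape as the get-sa loop in the log above.
func pollUntil(interval, timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}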
	I0719 15:52:30.101824    1544 settings.go:142] acquiring lock: {Name:mk58631521ffd49c3231a31589bcae3549c3b53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:30.101973    1544 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:52:30.102164    1544 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/kubeconfig: {Name:mk508b0ad49e7803e8fd5dcb96b45d1248a097b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:52:30.102360    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 15:52:30.102400    1544 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0719 15:52:30.102441    1544 addons.go:69] Setting volumesnapshots=true in profile "addons-101000"
	I0719 15:52:30.102447    1544 addons.go:231] Setting addon volumesnapshots=true in "addons-101000"
	I0719 15:52:30.102465    1544 addons.go:69] Setting metrics-server=true in profile "addons-101000"
	I0719 15:52:30.102475    1544 addons.go:231] Setting addon metrics-server=true in "addons-101000"
	I0719 15:52:30.102477    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102506    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102510    1544 addons.go:69] Setting ingress=true in profile "addons-101000"
	I0719 15:52:30.102533    1544 addons.go:69] Setting ingress-dns=true in profile "addons-101000"
	I0719 15:52:30.102557    1544 addons.go:231] Setting addon ingress=true in "addons-101000"
	I0719 15:52:30.102537    1544 addons.go:69] Setting inspektor-gadget=true in profile "addons-101000"
	I0719 15:52:30.102572    1544 addons.go:231] Setting addon inspektor-gadget=true in "addons-101000"
	I0719 15:52:30.102587    1544 addons.go:69] Setting registry=true in profile "addons-101000"
	I0719 15:52:30.102609    1544 addons.go:231] Setting addon registry=true in "addons-101000"
	I0719 15:52:30.102627    1544 addons.go:231] Setting addon ingress-dns=true in "addons-101000"
	I0719 15:52:30.102650    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102671    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102679    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102685    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:30.102713    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102729    1544 addons.go:69] Setting storage-provisioner=true in profile "addons-101000"
	I0719 15:52:30.102734    1544 addons.go:231] Setting addon storage-provisioner=true in "addons-101000"
	I0719 15:52:30.102749    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102918    1544 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-101000"
	I0719 15:52:30.102930    1544 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-101000"
	I0719 15:52:30.102932    1544 addons.go:69] Setting cloud-spanner=true in profile "addons-101000"
	I0719 15:52:30.102942    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.102946    1544 addons.go:231] Setting addon cloud-spanner=true in "addons-101000"
	I0719 15:52:30.102975    1544 host.go:66] Checking if "addons-101000" exists ...
	W0719 15:52:30.103028    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103039    1544 addons.go:277] "addons-101000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	I0719 15:52:30.103042    1544 addons.go:467] Verifying addon ingress=true in "addons-101000"
	I0719 15:52:30.106383    1544 out.go:177] * Verifying ingress addon...
	W0719 15:52:30.103028    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103167    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103182    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	I0719 15:52:30.103217    1544 addons.go:69] Setting default-storageclass=true in profile "addons-101000"
	W0719 15:52:30.103227    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.103229    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	I0719 15:52:30.103236    1544 addons.go:69] Setting gcp-auth=true in profile "addons-101000"
	W0719 15:52:30.103647    1544 host.go:54] host status for "addons-101000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/monitor: connect: connection refused
	W0719 15:52:30.115464    1544 addons.go:277] "addons-101000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115470    1544 addons.go:277] "addons-101000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115473    1544 addons.go:277] "addons-101000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	W0719 15:52:30.115476    1544 addons.go:277] "addons-101000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115487    1544 mustload.go:65] Loading cluster: addons-101000
	W0719 15:52:30.115490    1544 addons.go:277] "addons-101000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115530    1544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-101000"
	W0719 15:52:30.115554    1544 addons.go:277] "addons-101000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0719 15:52:30.115896    1544 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0719 15:52:30.118371    1544 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0719 15:52:30.125455    1544 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0719 15:52:30.125463    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0719 15:52:30.125471    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.122419    1544 addons.go:467] Verifying addon registry=true in "addons-101000"
	I0719 15:52:30.122435    1544 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.3
	I0719 15:52:30.133254    1544 out.go:177] * Verifying registry addon...
	I0719 15:52:30.122534    1544 config.go:182] Loaded profile config "addons-101000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 15:52:30.122472    1544 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-101000"
	I0719 15:52:30.126814    1544 addons.go:231] Setting addon default-storageclass=true in "addons-101000"
	I0719 15:52:30.127301    1544 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0719 15:52:30.129401    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0719 15:52:30.136476    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0719 15:52:30.136487    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.139357    1544 out.go:177] * Verifying csi-hostpath-driver addon...
	I0719 15:52:30.136602    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.137003    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0719 15:52:30.137526    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:30.146971    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0719 15:52:30.147364    1544 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:30.147370    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 15:52:30.147377    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:30.154043    1544 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0719 15:52:30.155110    1544 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0719 15:52:30.167928    1544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 15:52:30.178534    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0719 15:52:30.178543    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0719 15:52:30.183588    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0719 15:52:30.183597    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0719 15:52:30.191691    1544 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0719 15:52:30.191699    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0719 15:52:30.203614    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0719 15:52:30.203623    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0719 15:52:30.208695    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 15:52:30.219630    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0719 15:52:30.219641    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0719 15:52:30.228005    1544 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0719 15:52:30.228015    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0719 15:52:30.232889    1544 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:30.232896    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0719 15:52:30.237389    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:30.293905    1544 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:30.293918    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0719 15:52:30.323074    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0719 15:52:30.620570    1544 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-101000" context rescaled to 1 replicas
	I0719 15:52:30.620588    1544 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 15:52:30.627928    1544 out.go:177] * Verifying Kubernetes components...
	I0719 15:52:30.631983    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:30.771514    1544 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
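
The sed pipeline run at 15:52:30.167928 is what performed this injection: it splices a hosts block in front of the `forward . /etc/resolv.conf` line of the coredns ConfigMap and pipes the result back through `kubectl replace`. Reconstructed from that sed expression, the Corefile gains:

hosts {
   192.168.105.1 host.minikube.internal
   fallthrough
}

so pods inside the cluster can resolve host.minikube.internal to the host machine.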
	W0719 15:52:30.880793    1544 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0719 15:52:30.880817    1544 retry.go:31] will retry after 267.649411ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
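
This failure is an ordering race rather than a broken manifest: csi-hostpath-snapshotclass.yaml instantiates a VolumeSnapshotClass in the same apply that creates the VolumeSnapshotClass CRD, and the API server has not established the CRD yet — hence "ensure CRDs are installed first". retry.go's answer is to wait (267.649411ms here) and re-run; the re-run at 15:52:31.148622 completes (see 15:52:33.691576). A bare-bones analogue of that retry helper, assuming a doubling backoff (the real helper's policy differs):

package sketch

import (
	"fmt"
	"time"
)

// retry re-runs fn up to attempts times, sleeping a growing delay
// between tries, as in the "will retry after ..." line above.
func retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("gave up after %d attempts: %w", attempts, err)
}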
	I0719 15:52:30.944103    1544 addons.go:467] Verifying addon metrics-server=true in "addons-101000"
	I0719 15:52:30.944554    1544 node_ready.go:35] waiting up to 6m0s for node "addons-101000" to be "Ready" ...
	I0719 15:52:30.946971    1544 node_ready.go:49] node "addons-101000" has status "Ready":"True"
	I0719 15:52:30.946984    1544 node_ready.go:38] duration metric: took 2.412542ms waiting for node "addons-101000" to be "Ready" ...
	I0719 15:52:30.946988    1544 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:30.951440    1544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:31.148622    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0719 15:52:32.960900    1544 pod_ready.go:102] pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:33.460293    1544 pod_ready.go:92] pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.460303    1544 pod_ready.go:81] duration metric: took 2.50887925s waiting for pod "coredns-5d78c9869d-4dqmf" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.460308    1544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.463445    1544 pod_ready.go:92] pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.463453    1544 pod_ready.go:81] duration metric: took 3.140459ms waiting for pod "coredns-5d78c9869d-knvd5" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.463458    1544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.466021    1544 pod_ready.go:92] pod "etcd-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.466025    1544 pod_ready.go:81] duration metric: took 2.564083ms waiting for pod "etcd-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.466029    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.468619    1544 pod_ready.go:92] pod "kube-apiserver-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.468625    1544 pod_ready.go:81] duration metric: took 2.592875ms waiting for pod "kube-apiserver-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.468629    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.471097    1544 pod_ready.go:92] pod "kube-controller-manager-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.471103    1544 pod_ready.go:81] duration metric: took 2.47075ms waiting for pod "kube-controller-manager-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.471106    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jpdlk" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.691576    1544 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.542959542s)
	I0719 15:52:33.857829    1544 pod_ready.go:92] pod "kube-proxy-jpdlk" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:33.857838    1544 pod_ready.go:81] duration metric: took 386.731917ms waiting for pod "kube-proxy-jpdlk" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:33.857843    1544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:34.259787    1544 pod_ready.go:92] pod "kube-scheduler-addons-101000" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:34.259798    1544 pod_ready.go:81] duration metric: took 401.956208ms waiting for pod "kube-scheduler-addons-101000" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:34.259802    1544 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:36.666794    1544 pod_ready.go:102] pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace has status "Ready":"False"
	I0719 15:52:36.752105    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0719 15:52:36.752122    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:36.786675    1544 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0719 15:52:36.794559    1544 addons.go:231] Setting addon gcp-auth=true in "addons-101000"
	I0719 15:52:36.794583    1544 host.go:66] Checking if "addons-101000" exists ...
	I0719 15:52:36.795348    1544 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0719 15:52:36.795361    1544 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/addons-101000/id_rsa Username:docker}
	I0719 15:52:36.829668    1544 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0719 15:52:36.833634    1544 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0719 15:52:36.837615    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0719 15:52:36.837621    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0719 15:52:36.843015    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0719 15:52:36.843020    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0719 15:52:36.847964    1544 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 15:52:36.847971    1544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0719 15:52:36.853694    1544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0719 15:52:37.162175    1544 pod_ready.go:92] pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace has status "Ready":"True"
	I0719 15:52:37.162186    1544 pod_ready.go:81] duration metric: took 2.902411833s waiting for pod "metrics-server-844d8db974-vt8ml" in "kube-system" namespace to be "Ready" ...
	I0719 15:52:37.162191    1544 pod_ready.go:38] duration metric: took 6.215265042s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 15:52:37.162200    1544 api_server.go:52] waiting for apiserver process to appear ...
	I0719 15:52:37.162261    1544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 15:52:37.213333    1544 api_server.go:72] duration metric: took 6.592798333s to wait for apiserver process to appear ...
	I0719 15:52:37.213345    1544 api_server.go:88] waiting for apiserver healthz status ...
	I0719 15:52:37.213352    1544 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0719 15:52:37.213987    1544 addons.go:467] Verifying addon gcp-auth=true in "addons-101000"
	I0719 15:52:37.217265    1544 out.go:177] * Verifying gcp-auth addon...
	I0719 15:52:37.224564    1544 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0719 15:52:37.226285    1544 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0719 15:52:37.227830    1544 api_server.go:141] control plane version: v1.27.3
	I0719 15:52:37.227837    1544 api_server.go:131] duration metric: took 14.489167ms to wait for apiserver health ...
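
api_server.go probes https://192.168.105.2:8443/healthz directly and treats a 200 response with body `ok` as healthy. A minimal equivalent probe (InsecureSkipVerify is an illustrative shortcut; minikube instead verifies against the cluster CA it just generated):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Shortcut for the sketch only; pin the cluster CA in real use.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.105.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body)
}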
	I0719 15:52:37.227841    1544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 15:52:37.231500    1544 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0719 15:52:37.231508    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:37.232481    1544 system_pods.go:59] 10 kube-system pods found
	I0719 15:52:37.232488    1544 system_pods.go:61] "coredns-5d78c9869d-4dqmf" [3335e5bb-cd1b-4f88-8662-5f52c470886e] Running
	I0719 15:52:37.232491    1544 system_pods.go:61] "coredns-5d78c9869d-knvd5" [8982b7a7-b4c0-4d8f-aaac-24b2aa68727d] Running
	I0719 15:52:37.232493    1544 system_pods.go:61] "etcd-addons-101000" [76e23cae-d673-4ce3-8c1a-28173c4ad7fb] Running
	I0719 15:52:37.232495    1544 system_pods.go:61] "kube-apiserver-addons-101000" [b5b978e3-f407-4753-974c-733ac90e64b3] Running
	I0719 15:52:37.232497    1544 system_pods.go:61] "kube-controller-manager-addons-101000" [d4160a3e-e6e8-450b-8484-7fe93b6a935c] Running
	I0719 15:52:37.232500    1544 system_pods.go:61] "kube-proxy-jpdlk" [e6fd914c-3322-4343-bfc3-e20cc61a9c1b] Running
	I0719 15:52:37.232503    1544 system_pods.go:61] "kube-scheduler-addons-101000" [22d2fcb7-43d8-4531-958e-506bd7b1d4b6] Running
	I0719 15:52:37.232506    1544 system_pods.go:61] "metrics-server-844d8db974-vt8ml" [a25120a0-a3f2-4a32-851b-21a7b451818f] Running
	I0719 15:52:37.232510    1544 system_pods.go:61] "snapshot-controller-75bbb956b9-9qbz2" [449931f4-f646-44cb-b8ae-3246b6e14db1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.232514    1544 system_pods.go:61] "snapshot-controller-75bbb956b9-gsppf" [358b5d0e-1f68-47f3-bc7d-ebbf387b2e40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.232518    1544 system_pods.go:74] duration metric: took 4.674833ms to wait for pod list to return data ...
	I0719 15:52:37.232523    1544 default_sa.go:34] waiting for default service account to be created ...
	I0719 15:52:37.235194    1544 default_sa.go:45] found service account: "default"
	I0719 15:52:37.235203    1544 default_sa.go:55] duration metric: took 2.676875ms for default service account to be created ...
	I0719 15:52:37.235207    1544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 15:52:37.262055    1544 system_pods.go:86] 10 kube-system pods found
	I0719 15:52:37.262063    1544 system_pods.go:89] "coredns-5d78c9869d-4dqmf" [3335e5bb-cd1b-4f88-8662-5f52c470886e] Running
	I0719 15:52:37.262066    1544 system_pods.go:89] "coredns-5d78c9869d-knvd5" [8982b7a7-b4c0-4d8f-aaac-24b2aa68727d] Running
	I0719 15:52:37.262069    1544 system_pods.go:89] "etcd-addons-101000" [76e23cae-d673-4ce3-8c1a-28173c4ad7fb] Running
	I0719 15:52:37.262071    1544 system_pods.go:89] "kube-apiserver-addons-101000" [b5b978e3-f407-4753-974c-733ac90e64b3] Running
	I0719 15:52:37.262073    1544 system_pods.go:89] "kube-controller-manager-addons-101000" [d4160a3e-e6e8-450b-8484-7fe93b6a935c] Running
	I0719 15:52:37.262075    1544 system_pods.go:89] "kube-proxy-jpdlk" [e6fd914c-3322-4343-bfc3-e20cc61a9c1b] Running
	I0719 15:52:37.262078    1544 system_pods.go:89] "kube-scheduler-addons-101000" [22d2fcb7-43d8-4531-958e-506bd7b1d4b6] Running
	I0719 15:52:37.262080    1544 system_pods.go:89] "metrics-server-844d8db974-vt8ml" [a25120a0-a3f2-4a32-851b-21a7b451818f] Running
	I0719 15:52:37.262085    1544 system_pods.go:89] "snapshot-controller-75bbb956b9-9qbz2" [449931f4-f646-44cb-b8ae-3246b6e14db1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.262089    1544 system_pods.go:89] "snapshot-controller-75bbb956b9-gsppf" [358b5d0e-1f68-47f3-bc7d-ebbf387b2e40] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0719 15:52:37.262093    1544 system_pods.go:126] duration metric: took 26.883625ms to wait for k8s-apps to be running ...
	I0719 15:52:37.262096    1544 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 15:52:37.262153    1544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 15:52:37.267299    1544 system_svc.go:56] duration metric: took 5.200291ms WaitForService to wait for kubelet.
	I0719 15:52:37.267305    1544 kubeadm.go:581] duration metric: took 6.646776125s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0719 15:52:37.267313    1544 node_conditions.go:102] verifying NodePressure condition ...
	I0719 15:52:37.460427    1544 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0719 15:52:37.460461    1544 node_conditions.go:123] node cpu capacity is 2
	I0719 15:52:37.460466    1544 node_conditions.go:105] duration metric: took 193.152542ms to run NodePressure ...
	I0719 15:52:37.460471    1544 start.go:228] waiting for startup goroutines ...
	I0719 15:52:37.735761    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:38.234684    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:38.735371    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:39.235765    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:39.735907    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:40.235373    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:40.734681    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:41.235287    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:41.735207    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:42.235545    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:42.734812    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:43.235286    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:43.736840    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:44.235205    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:44.738509    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:45.235248    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:45.735521    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:46.236102    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:46.735748    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:47.235790    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:47.736196    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:48.235182    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:48.735185    1544 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0719 15:52:49.237494    1544 kapi.go:107] duration metric: took 12.013055167s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0719 15:52:49.242337    1544 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-101000 cluster.
	I0719 15:52:49.247318    1544 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0719 15:52:49.251640    1544 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0719 15:58:30.120713    1544 kapi.go:107] duration metric: took 6m0.008647875s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0719 15:58:30.121029    1544 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0719 15:58:30.144635    1544 kapi.go:107] duration metric: took 6m0.001552791s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0719 15:58:30.144676    1544 kapi.go:107] duration metric: took 6m0.011563667s to wait for kubernetes.io/minikube-addons=registry ...
	W0719 15:58:30.144755    1544 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0719 15:58:30.144795    1544 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0719 15:58:30.152624    1544 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, inspektor-gadget, default-storageclass, metrics-server, volumesnapshots, gcp-auth
	I0719 15:58:30.164713    1544 addons.go:502] enable addons completed in 6m0.066202625s: enabled=[ingress-dns storage-provisioner cloud-spanner inspektor-gadget default-storageclass metrics-server volumesnapshots gcp-auth]
	I0719 15:58:30.164763    1544 start.go:233] waiting for cluster config update ...
	I0719 15:58:30.164788    1544 start.go:242] writing updated cluster config ...
	I0719 15:58:30.169625    1544 ssh_runner.go:195] Run: rm -f paused
	I0719 15:58:30.237703    1544 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0719 15:58:30.241711    1544 out.go:177] * Done! kubectl is now configured to use "addons-101000" cluster and "default" namespace by default
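A note on the gcp-auth messages a few lines up: they name `gcp-auth-skip-secret` as the opt-out label key and `--refresh` as the way to re-mount credentials into pre-existing pods. A minimal sketch of both, assuming a hypothetical pod `my-app` with an illustrative image; only the label key and the `--refresh` flag come from the log itself:

    # Create a pod the gcp-auth webhook will skip (the label must be present at admission time):
    kubectl run my-app --image=nginx --labels=gcp-auth-skip-secret=true

    # Re-mount credentials into pods that already existed when the addon came up:
    minikube addons enable gcp-auth --refresh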
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-07-19 22:51:58 UTC, ends at Wed 2023-07-19 23:12:08 UTC. --
	Jul 19 22:52:44 addons-101000 dockerd[1155]: time="2023-07-19T22:52:44.191384231Z" level=warning msg="cleaning up after shim disconnected" id=feffc055991a5f9040fb2443f3d7d925f26478390a78d13e35fcf50a7b2bd9b3 namespace=moby
	Jul 19 22:52:44 addons-101000 dockerd[1155]: time="2023-07-19T22:52:44.191405671Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1149]: time="2023-07-19T22:52:45.210876361Z" level=info msg="ignoring event" container=ff082e616338f9ee177498a150f356af0b22b9e031e5c62344154819899b6b1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.211082068Z" level=info msg="shim disconnected" id=ff082e616338f9ee177498a150f356af0b22b9e031e5c62344154819899b6b1b namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.211367980Z" level=warning msg="cleaning up after shim disconnected" id=ff082e616338f9ee177498a150f356af0b22b9e031e5c62344154819899b6b1b namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.211378115Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336810596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336856434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336888007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 22:52:45 addons-101000 dockerd[1155]: time="2023-07-19T22:52:45.336900019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:45 addons-101000 cri-dockerd[1051]: time="2023-07-19T22:52:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0ee1896a26320c3f2a0276be91800f69e284fd28f8830c62afec6149f3e01934/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 22:52:45 addons-101000 dockerd[1149]: time="2023-07-19T22:52:45.710423798Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jul 19 22:52:48 addons-101000 cri-dockerd[1051]: time="2023-07-19T22:52:48Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502093407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502123765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502132147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 22:52:48 addons-101000 dockerd[1155]: time="2023-07-19T22:52:48.502138277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526761438Z" level=info msg="shim disconnected" id=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526811146Z" level=warning msg="cleaning up after shim disconnected" id=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.526818938Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1149]: time="2023-07-19T23:10:37.527016893Z" level=info msg="ignoring event" container=1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596743616Z" level=info msg="shim disconnected" id=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596774491Z" level=warning msg="cleaning up after shim disconnected" id=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1155]: time="2023-07-19T23:10:37.596779074Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:10:37 addons-101000 dockerd[1149]: time="2023-07-19T23:10:37.596907072Z" level=info msg="ignoring event" container=6777dc48b461a60a564d4c6d897d03aa240f0f002485e017fbb91565bbbc234a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	1cc3491bfa533       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              19 minutes ago      Running             gcp-auth                     0                   0ee1896a26320
	2b9a59faa57f0       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   19 minutes ago      Running             volume-snapshot-controller   0                   4d989e719f82a
	d47989aa362eb       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   19 minutes ago      Running             volume-snapshot-controller   0                   16bf9bed7271f
	0df05b2c74afc       97e04611ad434                                                                                                             19 minutes ago      Running             coredns                      0                   49ab25acb4281
	19588c52e552d       fb73e92641fd5                                                                                                             19 minutes ago      Running             kube-proxy                   0                   4d2bba9dbbd12
	f959c7f626d6e       24bc64e911039                                                                                                             19 minutes ago      Running             etcd                         0                   e9214702e68a5
	c6c632dd083f2       bcb9e554eaab6                                                                                                             19 minutes ago      Running             kube-scheduler               0                   956f93b928e2f
	862babcc9993e       ab3683b584ae5                                                                                                             19 minutes ago      Running             kube-controller-manager      0                   81af0dc9e0f17
	5984dda0d68af       39dfb036b0986                                                                                                             19 minutes ago      Running             kube-apiserver               0                   b8a23dc6dd212
	
	* 
	* ==> coredns [0df05b2c74af] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38035 - 43333 "HINFO IN 3178013197050500524.6871022848785512211. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004455439s
	[INFO] 10.244.0.9:44781 - 2820 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000096121s
	[INFO] 10.244.0.9:49458 - 31580 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000042327s
	[INFO] 10.244.0.9:47092 - 42379 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000038365s
	[INFO] 10.244.0.9:51547 - 25508 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000044495s
	[INFO] 10.244.0.9:33952 - 7053 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000043995s
	[INFO] 10.244.0.9:46272 - 9283 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000029482s
	[INFO] 10.244.0.9:44297 - 58626 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001146116s
	[INFO] 10.244.0.9:36915 - 33133 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 496 0.001094699s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-101000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-101000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4
	                    minikube.k8s.io/name=addons-101000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_19T15_52_16_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Jul 2023 22:52:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-101000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Jul 2023 23:12:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Jul 2023 23:08:36 +0000   Wed, 19 Jul 2023 22:52:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-101000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 06f7171a2e3b478b8a006d3ed11bcad4
	  System UUID:                06f7171a2e3b478b8a006d3ed11bcad4
	  Boot ID:                    388cb244-002c-43f0-bc4d-d5cefb6c596c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-hfg7x                0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5d78c9869d-knvd5                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 etcd-addons-101000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         19m
	  kube-system                 kube-apiserver-addons-101000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-addons-101000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-jpdlk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-101000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-75bbb956b9-9qbz2     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-75bbb956b9-gsppf     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node addons-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node addons-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node addons-101000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node addons-101000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node addons-101000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m                kubelet          Node addons-101000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                kubelet          Node addons-101000 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node addons-101000 event: Registered Node addons-101000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.640919] EINJ: EINJ table not found.
	[  +0.493985] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044020] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000805] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[Jul19 22:52] systemd-fstab-generator[483]: Ignoring "noauto" for root device
	[  +0.066120] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.415432] systemd-fstab-generator[754]: Ignoring "noauto" for root device
	[  +0.174029] systemd-fstab-generator[789]: Ignoring "noauto" for root device
	[  +0.074321] systemd-fstab-generator[800]: Ignoring "noauto" for root device
	[  +0.082963] systemd-fstab-generator[813]: Ignoring "noauto" for root device
	[  +1.234248] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +0.083333] systemd-fstab-generator[982]: Ignoring "noauto" for root device
	[  +0.083834] systemd-fstab-generator[993]: Ignoring "noauto" for root device
	[  +0.077345] systemd-fstab-generator[1004]: Ignoring "noauto" for root device
	[  +0.087881] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +2.511224] systemd-fstab-generator[1142]: Ignoring "noauto" for root device
	[  +2.197027] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.212424] systemd-fstab-generator[1454]: Ignoring "noauto" for root device
	[  +5.140733] systemd-fstab-generator[2321]: Ignoring "noauto" for root device
	[ +14.969880] kauditd_printk_skb: 41 callbacks suppressed
	[  +1.331585] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.144611] kauditd_printk_skb: 47 callbacks suppressed
	[  +7.075011] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.120561] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [f959c7f626d6] <==
	* {"level":"info","ts":"2023-07-19T22:52:12.834Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","added-peer-id":"c46d288d2fcb0590","added-peer-peer-urls":["https://192.168.105.2:2380"]}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-101000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-19T22:52:13.558Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T22:52:13.559Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-19T23:02:13.581Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":796}
	{"level":"info","ts":"2023-07-19T23:02:13.583Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":796,"took":"1.901103ms","hash":1438469743}
	{"level":"info","ts":"2023-07-19T23:02:13.583Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1438469743,"revision":796,"compact-revision":-1}
	{"level":"info","ts":"2023-07-19T23:07:13.591Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":947}
	{"level":"info","ts":"2023-07-19T23:07:13.593Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":947,"took":"1.234408ms","hash":3746673681}
	{"level":"info","ts":"2023-07-19T23:07:13.593Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3746673681,"revision":947,"compact-revision":796}
	
	* 
	* ==> gcp-auth [1cc3491bfa53] <==
	* 2023/07/19 22:52:48 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  23:12:09 up 20 min,  0 users,  load average: 0.16, 0.27, 0.26
	Linux addons-101000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [5984dda0d68a] <==
	* I0719 23:02:14.242891       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:02:14.242923       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:02:14.252138       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:03:14.170675       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:04:14.169449       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:05:14.171272       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:06:14.171498       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:07:14.168568       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:07:14.244513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0719 23:07:14.245028       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0719 23:07:14.260797       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:08:14.169738       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:09:14.170808       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0719 23:10:14.170454       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0719 23:10:38.206024       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0719 23:10:38.206055       1 handler_proxy.go:100] no RequestInfo found in the context
	E0719 23:10:38.206097       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 23:10:38.206106       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0719 23:10:38.206140       1 controller.go:132] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0719 23:11:38.206606       1 handler_proxy.go:144] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0719 23:11:38.206668       1 handler_proxy.go:100] no RequestInfo found in the context
	E0719 23:11:38.206959       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0719 23:11:38.207005       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
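The apiserver errors just above are the aftermath of the metrics-server teardown visible in the kubelet journal further down: the Service in kube-system is gone, but the aggregated APIService registration `v1beta1.metrics.k8s.io` remains, so OpenAPI aggregation keeps receiving 503s. A hedged sketch of confirming and clearing the stale registration (standard kubectl against the object name taken from the log; deleting it is only appropriate once metrics-server is intentionally removed):

    # Show aggregated APIs and their availability:
    kubectl get apiservices | grep metrics

    # Remove the registration left behind by the deleted metrics-server:
    kubectl delete apiservice v1beta1.metrics.k8s.io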
	* 
	* ==> kube-controller-manager [862babcc9993] <==
	* I0719 22:52:43.092574       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:43.110448       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:44.118612       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:44.212265       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.139777       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:45.152565       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.216322       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.218685       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.220523       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:52:45.220604       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0719 22:52:45.235773       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.163044       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.176901       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.183652       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:46.183863       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0719 22:52:46.193621       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:52:59.164951       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0719 22:52:59.165085       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0719 22:52:59.266614       1 shared_informer.go:318] Caches are synced for resource quota
	I0719 22:52:59.590599       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0719 22:52:59.691557       1 shared_informer.go:318] Caches are synced for garbage collector
	I0719 22:53:15.026654       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:53:15.048574       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0719 22:53:16.014227       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0719 22:53:16.037423       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [19588c52e552] <==
	* I0719 22:52:31.940544       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0719 22:52:31.940603       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0719 22:52:31.940629       1 server_others.go:554] "Using iptables proxy"
	I0719 22:52:31.978237       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0719 22:52:31.978247       1 server_others.go:192] "Using iptables Proxier"
	I0719 22:52:31.978272       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 22:52:31.978547       1 server.go:658] "Version info" version="v1.27.3"
	I0719 22:52:31.978554       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 22:52:31.980358       1 config.go:188] "Starting service config controller"
	I0719 22:52:31.980401       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0719 22:52:31.980465       1 config.go:97] "Starting endpoint slice config controller"
	I0719 22:52:31.980482       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0719 22:52:31.980953       1 config.go:315] "Starting node config controller"
	I0719 22:52:31.980984       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0719 22:52:32.081146       1 shared_informer.go:318] Caches are synced for node config
	I0719 22:52:32.081155       1 shared_informer.go:318] Caches are synced for service config
	I0719 22:52:32.081163       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c6c632dd083f] <==
	* W0719 22:52:14.250631       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 22:52:14.250639       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 22:52:14.250680       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 22:52:14.250689       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0719 22:52:14.250732       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 22:52:14.250739       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 22:52:14.250771       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 22:52:14.250778       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 22:52:14.250830       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 22:52:14.250837       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 22:52:14.250850       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 22:52:14.250875       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 22:52:14.250933       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:14.250977       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:14.251012       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 22:52:14.251046       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 22:52:14.251076       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:14.251088       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:15.080697       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 22:52:15.080736       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0719 22:52:15.086216       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0719 22:52:15.086240       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0719 22:52:15.262476       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 22:52:15.262526       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0719 22:52:15.546346       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-07-19 22:51:58 UTC, ends at Wed 2023-07-19 23:12:09 UTC. --
	Jul 19 23:08:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:09:16 addons-101000 kubelet[2341]: E0719 23:09:16.799632    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:09:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:09:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:09:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:10:16 addons-101000 kubelet[2341]: E0719 23:10:16.804319    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:10:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:10:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:10:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.730407    2341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a25120a0-a3f2-4a32-851b-21a7b451818f-tmp-dir\") pod \"a25120a0-a3f2-4a32-851b-21a7b451818f\" (UID: \"a25120a0-a3f2-4a32-851b-21a7b451818f\") "
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.730428    2341 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcm69\" (UniqueName: \"kubernetes.io/projected/a25120a0-a3f2-4a32-851b-21a7b451818f-kube-api-access-fcm69\") pod \"a25120a0-a3f2-4a32-851b-21a7b451818f\" (UID: \"a25120a0-a3f2-4a32-851b-21a7b451818f\") "
	Jul 19 23:10:37 addons-101000 kubelet[2341]: W0719 23:10:37.730470    2341 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/a25120a0-a3f2-4a32-851b-21a7b451818f/volumes/kubernetes.io~empty-dir/tmp-dir: clearQuota called, but quotas disabled
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.730516    2341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a25120a0-a3f2-4a32-851b-21a7b451818f-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "a25120a0-a3f2-4a32-851b-21a7b451818f" (UID: "a25120a0-a3f2-4a32-851b-21a7b451818f"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.732919    2341 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25120a0-a3f2-4a32-851b-21a7b451818f-kube-api-access-fcm69" (OuterVolumeSpecName: "kube-api-access-fcm69") pod "a25120a0-a3f2-4a32-851b-21a7b451818f" (UID: "a25120a0-a3f2-4a32-851b-21a7b451818f"). InnerVolumeSpecName "kube-api-access-fcm69". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.832250    2341 reconciler_common.go:300] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a25120a0-a3f2-4a32-851b-21a7b451818f-tmp-dir\") on node \"addons-101000\" DevicePath \"\""
	Jul 19 23:10:37 addons-101000 kubelet[2341]: I0719 23:10:37.832267    2341 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-fcm69\" (UniqueName: \"kubernetes.io/projected/a25120a0-a3f2-4a32-851b-21a7b451818f-kube-api-access-fcm69\") on node \"addons-101000\" DevicePath \"\""
	Jul 19 23:10:38 addons-101000 kubelet[2341]: I0719 23:10:38.541669    2341 scope.go:115] "RemoveContainer" containerID="1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279"
	Jul 19 23:10:38 addons-101000 kubelet[2341]: I0719 23:10:38.572299    2341 scope.go:115] "RemoveContainer" containerID="1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279"
	Jul 19 23:10:38 addons-101000 kubelet[2341]: E0719 23:10:38.573121    2341 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279" containerID="1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279"
	Jul 19 23:10:38 addons-101000 kubelet[2341]: I0719 23:10:38.573165    2341 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279} err="failed to get container status \"1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279\": rpc error: code = Unknown desc = Error response from daemon: No such container: 1916a4d50b456c96dde6cfb8c9a0592f96f7f3a97e32a9be366ec87324697279"
	Jul 19 23:10:38 addons-101000 kubelet[2341]: I0719 23:10:38.811194    2341 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=a25120a0-a3f2-4a32-851b-21a7b451818f path="/var/lib/kubelet/pods/a25120a0-a3f2-4a32-851b-21a7b451818f/volumes"
	Jul 19 23:11:16 addons-101000 kubelet[2341]: E0719 23:11:16.803860    2341 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:11:16 addons-101000 kubelet[2341]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:11:16 addons-101000 kubelet[2341]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:11:16 addons-101000 kubelet[2341]:  > table=nat chain=KUBE-KUBELET-CANARY
	

-- /stdout --
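The kubelet journal in the log above repeatedly fails its iptables canary because the guest kernel exposes no IPv6 nat table; these tests only exercise IPv4, so this is noise rather than the failure cause. A hedged way to confirm that reading from inside the VM (the module name is the standard netfilter one; whether this Buildroot image ships it is an assumption):

    # From `minikube ssh` on the node:
    sudo modprobe ip6table_nat    # load the IPv6 nat table module, if present in the image
    sudo ip6tables -t nat -L -n   # lists nat chains once the table exists, instead of exit status 3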
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-101000 -n addons-101000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-101000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CloudSpanner FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CloudSpanner (819.15s)

TestCertOptions (10.13s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-225000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-225000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.85366975s)

-- stdout --
	* [cert-options-225000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-225000 in cluster cert-options-225000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-225000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-225000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-225000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (79.825958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-225000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-225000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-225000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-225000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-225000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (39.450959ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-225000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-225000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-225000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-07-19 16:37:53.828252 -0700 PDT m=+2801.878380293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-225000 -n cert-options-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-225000 -n cert-options-225000: exit status 7 (28.38925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-225000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-225000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-225000
--- FAIL: TestCertOptions (10.13s)
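
Every qemu2 start in this report fails identically: qemu is launched through socket_vmnet_client, the client cannot reach the /var/run/socket_vmnet socket ("Connection refused"), so no VM ever boots and every downstream assertion (SAN contents, kubeconfig port, admin.conf) fails as a side effect. A quick host-side triage on the build agent, assuming socket_vmnet is installed at the paths shown in the logs (illustrative, not taken from this run):

	ls -l /var/run/socket_vmnet    # does the socket minikube points at exist?
	pgrep -fl socket_vmnet         # is any socket_vmnet process running?

If the socket is absent or nothing is listening, restarting the socket_vmnet daemon (however it was installed on this agent) should clear the "Connection refused" errors.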

TestCertExpiration (196.77s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-765000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-765000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.742959542s)

-- stdout --
	* [cert-expiration-765000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-765000 in cluster cert-expiration-765000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-765000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-765000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-765000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-765000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-765000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (6.850448917s)

-- stdout --
	* [cert-expiration-765000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-765000 in cluster cert-expiration-765000
	* Restarting existing qemu2 VM for "cert-expiration-765000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-765000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-765000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-765000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-765000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-765000 in cluster cert-expiration-765000
	* Restarting existing qemu2 VM for "cert-expiration-765000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-765000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-765000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-07-19 16:40:55.392626 -0700 PDT m=+2983.425894418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-765000 -n cert-expiration-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-765000 -n cert-expiration-765000: exit status 7 (69.943208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-765000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-765000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-765000
--- FAIL: TestCertExpiration (196.77s)
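
For reference, the scenario this test automates can be replayed by hand once the driver works: start with short-lived certs, wait out the window, then restart with a long expiry and expect a warning about expired certs. A sketch using the same flags and certificate path that appear in the logs above (assumes a healthy qemu2/socket_vmnet setup):

	out/minikube-darwin-arm64 start -p cert-expiration-765000 --memory=2048 --cert-expiration=3m --driver=qemu2
	out/minikube-darwin-arm64 ssh -p cert-expiration-765000 -- "sudo openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"
	# ...wait past the 3m window, then:
	out/minikube-darwin-arm64 start -p cert-expiration-765000 --memory=2048 --cert-expiration=8760h --driver=qemu2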

TestDockerFlags (9.89s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-538000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-538000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.651568583s)

-- stdout --
	* [docker-flags-538000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-538000 in cluster docker-flags-538000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-538000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:37:33.956352    3767 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:37:33.956484    3767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:37:33.956487    3767 out.go:309] Setting ErrFile to fd 2...
	I0719 16:37:33.956490    3767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:37:33.956593    3767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:37:33.957596    3767 out.go:303] Setting JSON to false
	I0719 16:37:33.972570    3767 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4024,"bootTime":1689805829,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:37:33.972637    3767 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:37:33.977836    3767 out.go:177] * [docker-flags-538000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:37:33.985941    3767 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:37:33.988732    3767 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:37:33.986004    3767 notify.go:220] Checking for updates...
	I0719 16:37:33.994745    3767 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:37:33.996091    3767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:37:33.998752    3767 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:37:34.001757    3767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:37:34.005179    3767 config.go:182] Loaded profile config "force-systemd-flag-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:37:34.005248    3767 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:37:34.005291    3767 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:37:34.009705    3767 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:37:34.016773    3767 start.go:298] selected driver: qemu2
	I0719 16:37:34.016778    3767 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:37:34.016786    3767 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:37:34.018573    3767 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:37:34.021751    3767 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:37:34.024849    3767 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0719 16:37:34.024873    3767 cni.go:84] Creating CNI manager for ""
	I0719 16:37:34.024881    3767 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:37:34.024892    3767 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:37:34.024898    3767 start_flags.go:319] config:
	{Name:docker-flags-538000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-538000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:37:34.028933    3767 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:37:34.035750    3767 out.go:177] * Starting control plane node docker-flags-538000 in cluster docker-flags-538000
	I0719 16:37:34.039779    3767 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:37:34.039812    3767 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:37:34.039823    3767 cache.go:57] Caching tarball of preloaded images
	I0719 16:37:34.039897    3767 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:37:34.039902    3767 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:37:34.039969    3767 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/docker-flags-538000/config.json ...
	I0719 16:37:34.039981    3767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/docker-flags-538000/config.json: {Name:mk0ec9e7d01a015b4fd3d4808302aee498b235f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:37:34.040190    3767 start.go:365] acquiring machines lock for docker-flags-538000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:37:34.040223    3767 start.go:369] acquired machines lock for "docker-flags-538000" in 25.042µs
	I0719 16:37:34.040234    3767 start.go:93] Provisioning new machine with config: &{Name:docker-flags-538000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-538000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:37:34.040261    3767 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:37:34.044738    3767 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 16:37:34.060595    3767 start.go:159] libmachine.API.Create for "docker-flags-538000" (driver="qemu2")
	I0719 16:37:34.060614    3767 client.go:168] LocalClient.Create starting
	I0719 16:37:34.060678    3767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:37:34.060699    3767 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:34.060710    3767 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:34.060767    3767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:37:34.060786    3767 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:34.060793    3767 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:34.061129    3767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:37:34.174938    3767 main.go:141] libmachine: Creating SSH key...
	I0719 16:37:34.251694    3767 main.go:141] libmachine: Creating Disk image...
	I0719 16:37:34.251700    3767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:37:34.251875    3767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2
	I0719 16:37:34.260295    3767 main.go:141] libmachine: STDOUT: 
	I0719 16:37:34.260313    3767 main.go:141] libmachine: STDERR: 
	I0719 16:37:34.260365    3767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2 +20000M
	I0719 16:37:34.267393    3767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:37:34.267404    3767 main.go:141] libmachine: STDERR: 
	I0719 16:37:34.267429    3767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2
	I0719 16:37:34.267441    3767 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:37:34.267487    3767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:23:6e:47:5a:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2
	I0719 16:37:34.268988    3767 main.go:141] libmachine: STDOUT: 
	I0719 16:37:34.269000    3767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:37:34.269018    3767 client.go:171] LocalClient.Create took 208.405083ms
	I0719 16:37:36.271133    3767 start.go:128] duration metric: createHost completed in 2.230895959s
	I0719 16:37:36.271196    3767 start.go:83] releasing machines lock for "docker-flags-538000", held for 2.231003542s
	W0719 16:37:36.271315    3767 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:36.293613    3767 out.go:177] * Deleting "docker-flags-538000" in qemu2 ...
	W0719 16:37:36.309434    3767 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:36.309457    3767 start.go:687] Will try again in 5 seconds ...
	I0719 16:37:41.311667    3767 start.go:365] acquiring machines lock for docker-flags-538000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:37:41.312113    3767 start.go:369] acquired machines lock for "docker-flags-538000" in 358.416µs
	I0719 16:37:41.312229    3767 start.go:93] Provisioning new machine with config: &{Name:docker-flags-538000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-538000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:37:41.312521    3767 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:37:41.320889    3767 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 16:37:41.368061    3767 start.go:159] libmachine.API.Create for "docker-flags-538000" (driver="qemu2")
	I0719 16:37:41.368103    3767 client.go:168] LocalClient.Create starting
	I0719 16:37:41.368257    3767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:37:41.368311    3767 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:41.368331    3767 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:41.368414    3767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:37:41.368446    3767 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:41.368465    3767 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:41.369360    3767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:37:41.497236    3767 main.go:141] libmachine: Creating SSH key...
	I0719 16:37:41.525688    3767 main.go:141] libmachine: Creating Disk image...
	I0719 16:37:41.525693    3767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:37:41.525840    3767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2
	I0719 16:37:41.534309    3767 main.go:141] libmachine: STDOUT: 
	I0719 16:37:41.534322    3767 main.go:141] libmachine: STDERR: 
	I0719 16:37:41.534374    3767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2 +20000M
	I0719 16:37:41.541460    3767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:37:41.541472    3767 main.go:141] libmachine: STDERR: 
	I0719 16:37:41.541483    3767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2
	I0719 16:37:41.541488    3767 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:37:41.541533    3767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:14:15:ad:f0:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/docker-flags-538000/disk.qcow2
	I0719 16:37:41.543029    3767 main.go:141] libmachine: STDOUT: 
	I0719 16:37:41.543040    3767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:37:41.543052    3767 client.go:171] LocalClient.Create took 174.947917ms
	I0719 16:37:43.545172    3767 start.go:128] duration metric: createHost completed in 2.23266825s
	I0719 16:37:43.545235    3767 start.go:83] releasing machines lock for "docker-flags-538000", held for 2.233136791s
	W0719 16:37:43.545815    3767 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-538000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-538000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:43.552380    3767 out.go:177] 
	W0719 16:37:43.556411    3767 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:37:43.556437    3767 out.go:239] * 
	* 
	W0719 16:37:43.559162    3767 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:37:43.567223    3767 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-538000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
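
The verbose trace above makes the failure mechanism visible: the driver execs qemu-system-aarch64 through socket_vmnet_client, which is expected to hand the connected vmnet socket to qemu as file descriptor 3 (hence `-netdev socket,id=net0,fd=3`). Reduced to its shape, with paths copied from the log (disk image and MAC address vary per profile):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 2048 -smp 2 \
	  -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3 \
	  -daemonize <disk.qcow2>

When the client cannot connect to /var/run/socket_vmnet, it exits with status 1 before qemu starts, which matches the "exit status 1" wrapped into the StartHost errors.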
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-538000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-538000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (75.548542ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-538000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-538000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-538000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-538000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-538000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-538000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (40.718ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-538000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-538000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-538000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-538000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-07-19 16:37:43.699915 -0700 PDT m=+2791.749862501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-538000 -n docker-flags-538000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-538000 -n docker-flags-538000: exit status 7 (28.1855ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-538000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-538000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-538000
--- FAIL: TestDockerFlags (9.89s)
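
What this test verifies, for when the driver is healthy again: values passed via --docker-env and --docker-opt must surface in the docker systemd unit inside the node. The same commands the test ran, replayable by hand (assumes a started profile):

	out/minikube-darwin-arm64 start -p docker-flags-538000 --memory=2048 --docker-env=FOO=BAR --docker-opt=debug --driver=qemu2
	out/minikube-darwin-arm64 -p docker-flags-538000 ssh "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR in Environment=
	out/minikube-darwin-arm64 -p docker-flags-538000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # expect --debug in ExecStart=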

TestForceSystemdFlag (11.45s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-992000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-992000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.248600583s)

-- stdout --
	* [force-systemd-flag-992000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-992000 in cluster force-systemd-flag-992000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-992000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:37:27.350183    3742 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:37:27.350305    3742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:37:27.350310    3742 out.go:309] Setting ErrFile to fd 2...
	I0719 16:37:27.350312    3742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:37:27.350438    3742 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:37:27.351419    3742 out.go:303] Setting JSON to false
	I0719 16:37:27.366227    3742 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4018,"bootTime":1689805829,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:37:27.366314    3742 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:37:27.371669    3742 out.go:177] * [force-systemd-flag-992000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:37:27.378633    3742 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:37:27.382681    3742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:37:27.378683    3742 notify.go:220] Checking for updates...
	I0719 16:37:27.388625    3742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:37:27.391657    3742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:37:27.394622    3742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:37:27.397614    3742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:37:27.401023    3742 config.go:182] Loaded profile config "force-systemd-env-908000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:37:27.401093    3742 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:37:27.401136    3742 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:37:27.405600    3742 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:37:27.412605    3742 start.go:298] selected driver: qemu2
	I0719 16:37:27.412609    3742 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:37:27.412614    3742 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:37:27.414409    3742 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:37:27.417525    3742 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:37:27.420664    3742 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 16:37:27.420680    3742 cni.go:84] Creating CNI manager for ""
	I0719 16:37:27.420687    3742 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:37:27.420693    3742 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:37:27.420699    3742 start_flags.go:319] config:
	{Name:force-systemd-flag-992000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:37:27.424674    3742 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:37:27.431596    3742 out.go:177] * Starting control plane node force-systemd-flag-992000 in cluster force-systemd-flag-992000
	I0719 16:37:27.435581    3742 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:37:27.435605    3742 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:37:27.435617    3742 cache.go:57] Caching tarball of preloaded images
	I0719 16:37:27.435677    3742 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:37:27.435683    3742 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:37:27.435740    3742 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/force-systemd-flag-992000/config.json ...
	I0719 16:37:27.435752    3742 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/force-systemd-flag-992000/config.json: {Name:mk70882a31a6bd465ec3f97f1cc7624e284c887a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:37:27.435952    3742 start.go:365] acquiring machines lock for force-systemd-flag-992000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:37:27.435987    3742 start.go:369] acquired machines lock for "force-systemd-flag-992000" in 26.25µs
	I0719 16:37:27.435999    3742 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:37:27.436029    3742 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:37:27.444603    3742 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 16:37:27.461677    3742 start.go:159] libmachine.API.Create for "force-systemd-flag-992000" (driver="qemu2")
	I0719 16:37:27.461702    3742 client.go:168] LocalClient.Create starting
	I0719 16:37:27.461758    3742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:37:27.461784    3742 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:27.461792    3742 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:27.461829    3742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:37:27.461845    3742 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:27.461853    3742 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:27.462201    3742 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:37:27.574903    3742 main.go:141] libmachine: Creating SSH key...
	I0719 16:37:27.649367    3742 main.go:141] libmachine: Creating Disk image...
	I0719 16:37:27.649374    3742 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:37:27.649523    3742 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0719 16:37:27.657979    3742 main.go:141] libmachine: STDOUT: 
	I0719 16:37:27.657995    3742 main.go:141] libmachine: STDERR: 
	I0719 16:37:27.658052    3742 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2 +20000M
	I0719 16:37:27.665309    3742 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:37:27.665322    3742 main.go:141] libmachine: STDERR: 
	I0719 16:37:27.665350    3742 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0719 16:37:27.665357    3742 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:37:27.665412    3742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:5a:ba:fe:d7:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0719 16:37:27.666952    3742 main.go:141] libmachine: STDOUT: 
	I0719 16:37:27.666964    3742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:37:27.666982    3742 client.go:171] LocalClient.Create took 205.281041ms
	I0719 16:37:29.669095    3742 start.go:128] duration metric: createHost completed in 2.233089042s
	I0719 16:37:29.669192    3742 start.go:83] releasing machines lock for "force-systemd-flag-992000", held for 2.233203708s
	W0719 16:37:29.669256    3742 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:29.680318    3742 out.go:177] * Deleting "force-systemd-flag-992000" in qemu2 ...
	W0719 16:37:29.701677    3742 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:29.701706    3742 start.go:687] Will try again in 5 seconds ...
	I0719 16:37:34.703860    3742 start.go:365] acquiring machines lock for force-systemd-flag-992000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:37:36.271469    3742 start.go:369] acquired machines lock for "force-systemd-flag-992000" in 1.567450666s
	I0719 16:37:36.271603    3742 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:37:36.271957    3742 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:37:36.285585    3742 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 16:37:36.330904    3742 start.go:159] libmachine.API.Create for "force-systemd-flag-992000" (driver="qemu2")
	I0719 16:37:36.330957    3742 client.go:168] LocalClient.Create starting
	I0719 16:37:36.331113    3742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:37:36.331156    3742 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:36.331176    3742 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:36.331267    3742 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:37:36.331296    3742 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:36.331308    3742 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:36.331813    3742 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:37:36.462515    3742 main.go:141] libmachine: Creating SSH key...
	I0719 16:37:36.514263    3742 main.go:141] libmachine: Creating Disk image...
	I0719 16:37:36.514268    3742 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:37:36.514414    3742 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0719 16:37:36.522962    3742 main.go:141] libmachine: STDOUT: 
	I0719 16:37:36.522977    3742 main.go:141] libmachine: STDERR: 
	I0719 16:37:36.523035    3742 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2 +20000M
	I0719 16:37:36.530159    3742 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:37:36.530172    3742 main.go:141] libmachine: STDERR: 
	I0719 16:37:36.530185    3742 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0719 16:37:36.530189    3742 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:37:36.530222    3742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:45:62:39:29:38 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-flag-992000/disk.qcow2
	I0719 16:37:36.531829    3742 main.go:141] libmachine: STDOUT: 
	I0719 16:37:36.531841    3742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:37:36.531853    3742 client.go:171] LocalClient.Create took 200.88975ms
	I0719 16:37:38.533987    3742 start.go:128] duration metric: createHost completed in 2.2620485s
	I0719 16:37:38.534053    3742 start.go:83] releasing machines lock for "force-systemd-flag-992000", held for 2.262562375s
	W0719 16:37:38.534473    3742 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-992000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-992000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:38.544071    3742 out.go:177] 
	W0719 16:37:38.549106    3742 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:37:38.549159    3742 out.go:239] * 
	* 
	W0719 16:37:38.551858    3742 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:37:38.558818    3742 out.go:177] 

** /stderr **
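Note: the launch command recorded at 16:37:27.665412 shows why the daemon matters. socket_vmnet_client first connects to the unix socket it is given, then execs qemu-system-aarch64 with that connection inherited as file descriptor 3, which qemu consumes via -netdev socket,id=net0,fd=3; the "Connection refused" happens at the connect step, so qemu is never started. Trimmed sketch of the shape of that command (the "..." marks flags elided from the full invocation in the log above):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 ... -device virtio-net-pci,netdev=net0,... -netdev socket,id=net0,fd=3 ...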
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-992000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-992000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-992000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (77.437041ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-992000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-992000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-07-19 16:37:38.652362 -0700 PDT m=+2786.702219335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-992000 -n force-systemd-flag-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-992000 -n force-systemd-flag-992000: exit status 7 (33.441208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-992000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-992000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-992000
--- FAIL: TestForceSystemdFlag (11.45s)
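Note: both provisioning attempts above fail at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM never gets a network backend and createHost aborts. A minimal sketch for checking the daemon on the host, assuming socket_vmnet was installed via Homebrew at the paths shown in the log; the brew service name is an assumption, not taken from this report:

	# Is anything listening on the socket the log points at?
	ls -l /var/run/socket_vmnet
	# Restart the daemon if the socket is missing or stale (hypothetical service name; adjust to your install)
	sudo brew services restart socket_vmnet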

TestForceSystemdEnv (10.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-908000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
E0719 16:37:24.147017    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-908000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.826458416s)

-- stdout --
	* [force-systemd-env-908000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-908000 in cluster force-systemd-env-908000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-908000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:37:23.916704    3723 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:37:23.916831    3723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:37:23.916835    3723 out.go:309] Setting ErrFile to fd 2...
	I0719 16:37:23.916838    3723 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:37:23.916952    3723 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:37:23.918009    3723 out.go:303] Setting JSON to false
	I0719 16:37:23.933616    3723 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4014,"bootTime":1689805829,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:37:23.933684    3723 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:37:23.938171    3723 out.go:177] * [force-systemd-env-908000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:37:23.946228    3723 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:37:23.946243    3723 notify.go:220] Checking for updates...
	I0719 16:37:23.954184    3723 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:37:23.957204    3723 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:37:23.960175    3723 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:37:23.964143    3723 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:37:23.967191    3723 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0719 16:37:23.970442    3723 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:37:23.970487    3723 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:37:23.975153    3723 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:37:23.982150    3723 start.go:298] selected driver: qemu2
	I0719 16:37:23.982155    3723 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:37:23.982160    3723 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:37:23.984041    3723 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:37:23.987136    3723 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:37:23.990218    3723 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 16:37:23.990231    3723 cni.go:84] Creating CNI manager for ""
	I0719 16:37:23.990236    3723 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:37:23.990239    3723 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:37:23.990244    3723 start_flags.go:319] config:
	{Name:force-systemd-env-908000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-908000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:37:23.993956    3723 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:37:24.001173    3723 out.go:177] * Starting control plane node force-systemd-env-908000 in cluster force-systemd-env-908000
	I0719 16:37:24.005149    3723 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:37:24.005170    3723 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:37:24.005177    3723 cache.go:57] Caching tarball of preloaded images
	I0719 16:37:24.005224    3723 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:37:24.005229    3723 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:37:24.005273    3723 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/force-systemd-env-908000/config.json ...
	I0719 16:37:24.005283    3723 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/force-systemd-env-908000/config.json: {Name:mkcbbf57234614dc256adcbb4f7e1729e96c3954 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:37:24.005473    3723 start.go:365] acquiring machines lock for force-systemd-env-908000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:37:24.005500    3723 start.go:369] acquired machines lock for "force-systemd-env-908000" in 21.042µs
	I0719 16:37:24.005510    3723 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-908000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:37:24.005531    3723 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:37:24.013973    3723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 16:37:24.027516    3723 start.go:159] libmachine.API.Create for "force-systemd-env-908000" (driver="qemu2")
	I0719 16:37:24.027539    3723 client.go:168] LocalClient.Create starting
	I0719 16:37:24.027591    3723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:37:24.027611    3723 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:24.027621    3723 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:24.027665    3723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:37:24.027679    3723 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:24.027684    3723 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:24.027963    3723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:37:24.142687    3723 main.go:141] libmachine: Creating SSH key...
	I0719 16:37:24.349428    3723 main.go:141] libmachine: Creating Disk image...
	I0719 16:37:24.349438    3723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:37:24.349615    3723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2
	I0719 16:37:24.358960    3723 main.go:141] libmachine: STDOUT: 
	I0719 16:37:24.358982    3723 main.go:141] libmachine: STDERR: 
	I0719 16:37:24.359082    3723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2 +20000M
	I0719 16:37:24.367401    3723 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:37:24.367421    3723 main.go:141] libmachine: STDERR: 
	I0719 16:37:24.367449    3723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2
	I0719 16:37:24.367457    3723 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:37:24.367513    3723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:4f:2d:b5:0c:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2
	I0719 16:37:24.369402    3723 main.go:141] libmachine: STDOUT: 
	I0719 16:37:24.369414    3723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:37:24.369435    3723 client.go:171] LocalClient.Create took 341.89925ms
	I0719 16:37:26.371597    3723 start.go:128] duration metric: createHost completed in 2.366076s
	I0719 16:37:26.371714    3723 start.go:83] releasing machines lock for "force-systemd-env-908000", held for 2.366246167s
	W0719 16:37:26.371820    3723 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:26.379150    3723 out.go:177] * Deleting "force-systemd-env-908000" in qemu2 ...
	W0719 16:37:26.399291    3723 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:26.399323    3723 start.go:687] Will try again in 5 seconds ...
	I0719 16:37:31.401548    3723 start.go:365] acquiring machines lock for force-systemd-env-908000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:37:31.401998    3723 start.go:369] acquired machines lock for "force-systemd-env-908000" in 346µs
	I0719 16:37:31.402148    3723 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-908000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-908000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:37:31.402441    3723 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:37:31.412121    3723 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0719 16:37:31.459405    3723 start.go:159] libmachine.API.Create for "force-systemd-env-908000" (driver="qemu2")
	I0719 16:37:31.459444    3723 client.go:168] LocalClient.Create starting
	I0719 16:37:31.459578    3723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:37:31.459625    3723 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:31.459644    3723 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:31.459712    3723 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:37:31.459741    3723 main.go:141] libmachine: Decoding PEM data...
	I0719 16:37:31.459755    3723 main.go:141] libmachine: Parsing certificate...
	I0719 16:37:31.460456    3723 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:37:31.587798    3723 main.go:141] libmachine: Creating SSH key...
	I0719 16:37:31.658744    3723 main.go:141] libmachine: Creating Disk image...
	I0719 16:37:31.658752    3723 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:37:31.658913    3723 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2
	I0719 16:37:31.667429    3723 main.go:141] libmachine: STDOUT: 
	I0719 16:37:31.667442    3723 main.go:141] libmachine: STDERR: 
	I0719 16:37:31.667496    3723 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2 +20000M
	I0719 16:37:31.674652    3723 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:37:31.674664    3723 main.go:141] libmachine: STDERR: 
	I0719 16:37:31.674677    3723 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2
	I0719 16:37:31.674684    3723 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:37:31.674717    3723 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:db:87:3a:fb:29 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/force-systemd-env-908000/disk.qcow2
	I0719 16:37:31.676244    3723 main.go:141] libmachine: STDOUT: 
	I0719 16:37:31.676267    3723 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:37:31.676278    3723 client.go:171] LocalClient.Create took 216.834167ms
	I0719 16:37:33.678428    3723 start.go:128] duration metric: createHost completed in 2.275998625s
	I0719 16:37:33.678478    3723 start.go:83] releasing machines lock for "force-systemd-env-908000", held for 2.2764965s
	W0719 16:37:33.678834    3723 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-908000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:37:33.687559    3723 out.go:177] 
	W0719 16:37:33.691498    3723 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:37:33.691521    3723 out.go:239] * 
	* 
	W0719 16:37:33.694168    3723 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:37:33.702507    3723 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-908000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-908000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-908000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (80.14775ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-908000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-908000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-07-19 16:37:33.797837 -0700 PDT m=+2781.847608460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-908000 -n force-systemd-env-908000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-908000 -n force-systemd-env-908000: exit status 7 (34.024667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-908000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-908000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-908000
--- FAIL: TestForceSystemdEnv (10.04s)
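Note: TestForceSystemdFlag and TestForceSystemdEnv exercise the same assertion through two entry points visible above: the --force-systemd flag (docker_test.go:93) and the MINIKUBE_FORCE_SYSTEMD=true environment variable (docker_test.go:155). Had either cluster started, the follow-up check at docker_test.go:110 would have read Docker's cgroup driver inside the VM, presumably expecting "systemd". A sketch of the manual equivalent, using the binary path and profile name from this report:

	MINIKUBE_FORCE_SYSTEMD=true out/minikube-darwin-arm64 start -p force-systemd-env-908000 --memory=2048 --driver=qemu2
	out/minikube-darwin-arm64 -p force-systemd-env-908000 ssh "docker info --format {{.CgroupDriver}}"   # expected: systemd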

TestFunctional/parallel/ServiceCmdConnect (39.07s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-001000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-001000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-xlbs9" [7f191746-b65e-42b3-8776-cff449eb810c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-xlbs9" [7f191746-b65e-42b3-8776-cff449eb810c] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0719 16:28:30.202537    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
E0719 16:28:30.210076    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
E0719 16:28:30.222128    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
E0719 16:28:30.244211    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
E0719 16:28:30.286389    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
E0719 16:28:30.368531    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.013151708s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:32070
functional_test.go:1660: error fetching http://192.168.105.4:32070: Get "http://192.168.105.4:32070": dial tcp 192.168.105.4:32070: connect: connection refused
E0719 16:28:35.338629    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
functional_test.go:1660: error fetching http://192.168.105.4:32070: Get "http://192.168.105.4:32070": dial tcp 192.168.105.4:32070: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32070: Get "http://192.168.105.4:32070": dial tcp 192.168.105.4:32070: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32070: Get "http://192.168.105.4:32070": dial tcp 192.168.105.4:32070: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32070: Get "http://192.168.105.4:32070": dial tcp 192.168.105.4:32070: connect: connection refused
2023/07/19 16:28:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1660: error fetching http://192.168.105.4:32070: Get "http://192.168.105.4:32070": dial tcp 192.168.105.4:32070: connect: connection refused
E0719 16:28:50.702850    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
functional_test.go:1660: error fetching http://192.168.105.4:32070: Get "http://192.168.105.4:32070": dial tcp 192.168.105.4:32070: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:32070: Get "http://192.168.105.4:32070": dial tcp 192.168.105.4:32070: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-001000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-58d66798bb-xlbs9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-001000/192.168.105.4
Start Time:       Wed, 19 Jul 2023 16:28:22 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=58d66798bb
Annotations:      <none>
Status:           Running
IP:               10.244.0.11
IPs:
  IP:           10.244.0.11
Controlled By:  ReplicaSet/hello-node-connect-58d66798bb
Containers:
  echoserver-arm:
    Container ID:   docker://a51669e0da4a39f8394d184d3ab1685e643724f0314cb575f21437cf05fd3749
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 19 Jul 2023 16:28:46 -0700
      Finished:     Wed, 19 Jul 2023 16:28:46 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v6tmk (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-v6tmk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  38s                default-scheduler  Successfully assigned default/hello-node-connect-58d66798bb-xlbs9 to functional-001000
  Normal   Pulling    37s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
  Normal   Pulled     32s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 5.188487097s (5.533792377s including waiting)
  Normal   Created    14s (x3 over 31s)  kubelet            Created container echoserver-arm
  Normal   Started    14s (x3 over 31s)  kubelet            Started container echoserver-arm
  Normal   Pulled     14s (x2 over 31s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Warning  BackOff    12s (x4 over 30s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-58d66798bb-xlbs9_default(7f191746-b65e-42b3-8776-cff449eb810c)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-001000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
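Note: "exec format error" means the kernel refused to run the container's entrypoint binary because it was built for a different CPU architecture than this arm64 node, which is why the pod crash-loops and never becomes Ready. A sketch for confirming which platforms the image actually ships, assuming a Docker client with manifest support is available on the host:

	docker manifest inspect registry.k8s.io/echoserver-arm:1.8 | grep architecture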
functional_test.go:1613: (dbg) Run:  kubectl --context functional-001000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.194.133
IPs:                      10.101.194.133
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32070/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
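Note: the empty Endpoints field above is the proximate cause of the repeated "connection refused" against http://192.168.105.4:32070: with its only pod in CrashLoopBackOff, the Service has no ready backends, so nothing answers on the NodePort. A quick way to confirm, using the context name from this report:

	kubectl --context functional-001000 get endpoints hello-node-connect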
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-001000 -n functional-001000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-001000 ssh findmnt        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | -T /mount1                           |                   |         |         |                     |                     |
	| ssh            | functional-001000 ssh findmnt        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | -T /mount1                           |                   |         |         |                     |                     |
	| ssh            | functional-001000 ssh findmnt        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | -T /mount2                           |                   |         |         |                     |                     |
	| ssh            | functional-001000 ssh findmnt        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | -T /mount3                           |                   |         |         |                     |                     |
	| mount          | -p functional-001000                 | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | --kill=true                          |                   |         |         |                     |                     |
	| service        | functional-001000 service            | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | hello-node-connect --url             |                   |         |         |                     |                     |
	| service        | functional-001000 service list       | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	| service        | functional-001000 service list       | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | -o json                              |                   |         |         |                     |                     |
	| service        | functional-001000 service            | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | --namespace=default --https          |                   |         |         |                     |                     |
	|                | --url hello-node                     |                   |         |         |                     |                     |
	| service        | functional-001000                    | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | service hello-node --url             |                   |         |         |                     |                     |
	|                | --format={{.IP}}                     |                   |         |         |                     |                     |
	| service        | functional-001000 service            | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | hello-node --url                     |                   |         |         |                     |                     |
	| start          | -p functional-001000                 | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | --dry-run --memory                   |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                   |         |         |                     |                     |
	|                | --driver=qemu2                       |                   |         |         |                     |                     |
	| start          | -p functional-001000                 | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | --dry-run --memory                   |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr              |                   |         |         |                     |                     |
	|                | --driver=qemu2                       |                   |         |         |                     |                     |
	| start          | -p functional-001000 --dry-run       | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | --alsologtostderr -v=1               |                   |         |         |                     |                     |
	|                | --driver=qemu2                       |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                   | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | -p functional-001000                 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1               |                   |         |         |                     |                     |
	| image          | functional-001000                    | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format short              |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| image          | functional-001000                    | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format yaml               |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| ssh            | functional-001000 ssh pgrep          | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | buildkitd                            |                   |         |         |                     |                     |
	| image          | functional-001000 image build -t     | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | localhost/my-image:functional-001000 |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr     |                   |         |         |                     |                     |
	| image          | functional-001000 image ls           | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	| image          | functional-001000                    | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format json               |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| image          | functional-001000                    | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format table              |                   |         |         |                     |                     |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	| update-context | functional-001000                    | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | update-context                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |         |         |                     |                     |
	| update-context | functional-001000                    | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | update-context                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |         |         |                     |                     |
	| update-context | functional-001000                    | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | update-context                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |         |         |                     |                     |
	|----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/19 16:28:38
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 16:28:38.354677    2772 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:28:38.354794    2772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:28:38.354797    2772 out.go:309] Setting ErrFile to fd 2...
	I0719 16:28:38.354799    2772 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:28:38.354919    2772 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:28:38.355926    2772 out.go:303] Setting JSON to false
	I0719 16:28:38.371872    2772 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3489,"bootTime":1689805829,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:28:38.371959    2772 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:28:38.377163    2772 out.go:177] * [functional-001000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:28:38.380115    2772 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:28:38.380159    2772 notify.go:220] Checking for updates...
	I0719 16:28:38.387105    2772 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:28:38.390183    2772 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:28:38.393142    2772 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:28:38.395997    2772 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:28:38.399132    2772 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:28:38.402377    2772 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:28:38.402642    2772 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:28:38.406059    2772 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 16:28:38.413086    2772 start.go:298] selected driver: qemu2
	I0719 16:28:38.413089    2772 start.go:880] validating driver "qemu2" against &{Name:functional-001000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-001000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:28:38.413127    2772 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:28:38.414860    2772 cni.go:84] Creating CNI manager for ""
	I0719 16:28:38.414872    2772 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:28:38.414878    2772 start_flags.go:319] config:
	{Name:functional-001000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-001000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:28:38.426095    2772 out.go:177] * dry-run validation complete!
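The Last Start log above matches the final dry-run entry in the Audit table. Assuming the workspace binary and profile are still in place, the same invocation can be replayed from the integration checkout with:

  $ out/minikube-darwin-arm64 start -p functional-001000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2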
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-07-19 23:25:49 UTC, ends at Wed 2023-07-19 23:29:01 UTC. --
	Jul 19 23:28:45 functional-001000 dockerd[6893]: time="2023-07-19T23:28:45.835905166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:28:45 functional-001000 dockerd[6893]: time="2023-07-19T23:28:45.835948625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:28:45 functional-001000 dockerd[6893]: time="2023-07-19T23:28:45.835984291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:28:45 functional-001000 dockerd[6893]: time="2023-07-19T23:28:45.835996000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:28:45 functional-001000 dockerd[6887]: time="2023-07-19T23:28:45.890947220Z" level=info msg="ignoring event" container=6db9649ad818549b8503c40071e0e6bf323a37876827a4f8471787434f49e25e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:28:45 functional-001000 dockerd[6893]: time="2023-07-19T23:28:45.891150179Z" level=info msg="shim disconnected" id=6db9649ad818549b8503c40071e0e6bf323a37876827a4f8471787434f49e25e namespace=moby
	Jul 19 23:28:45 functional-001000 dockerd[6893]: time="2023-07-19T23:28:45.891276012Z" level=warning msg="cleaning up after shim disconnected" id=6db9649ad818549b8503c40071e0e6bf323a37876827a4f8471787434f49e25e namespace=moby
	Jul 19 23:28:45 functional-001000 dockerd[6893]: time="2023-07-19T23:28:45.891295637Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:28:46 functional-001000 dockerd[6893]: time="2023-07-19T23:28:46.835605306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:28:46 functional-001000 dockerd[6893]: time="2023-07-19T23:28:46.835670306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:28:46 functional-001000 dockerd[6893]: time="2023-07-19T23:28:46.835694973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:28:46 functional-001000 dockerd[6893]: time="2023-07-19T23:28:46.835712265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:28:46 functional-001000 dockerd[6887]: time="2023-07-19T23:28:46.895866228Z" level=info msg="ignoring event" container=a51669e0da4a39f8394d184d3ab1685e643724f0314cb575f21437cf05fd3749 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:28:46 functional-001000 dockerd[6893]: time="2023-07-19T23:28:46.896156478Z" level=info msg="shim disconnected" id=a51669e0da4a39f8394d184d3ab1685e643724f0314cb575f21437cf05fd3749 namespace=moby
	Jul 19 23:28:46 functional-001000 dockerd[6893]: time="2023-07-19T23:28:46.896227770Z" level=warning msg="cleaning up after shim disconnected" id=a51669e0da4a39f8394d184d3ab1685e643724f0314cb575f21437cf05fd3749 namespace=moby
	Jul 19 23:28:46 functional-001000 dockerd[6893]: time="2023-07-19T23:28:46.896246353Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:28:50 functional-001000 dockerd[6893]: time="2023-07-19T23:28:50.623957747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:28:50 functional-001000 dockerd[6893]: time="2023-07-19T23:28:50.623990581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:28:50 functional-001000 dockerd[6893]: time="2023-07-19T23:28:50.623998997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:28:50 functional-001000 dockerd[6893]: time="2023-07-19T23:28:50.624203707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:28:50 functional-001000 dockerd[6887]: time="2023-07-19T23:28:50.762945942Z" level=info msg="ignoring event" container=ae3ab9a590efd864f1ee6aa80555f35dd5b84389d86bc56c1d8a9d4caa71aa99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:28:50 functional-001000 dockerd[6893]: time="2023-07-19T23:28:50.763327986Z" level=info msg="shim disconnected" id=ae3ab9a590efd864f1ee6aa80555f35dd5b84389d86bc56c1d8a9d4caa71aa99 namespace=moby
	Jul 19 23:28:50 functional-001000 dockerd[6893]: time="2023-07-19T23:28:50.763529695Z" level=warning msg="cleaning up after shim disconnected" id=ae3ab9a590efd864f1ee6aa80555f35dd5b84389d86bc56c1d8a9d4caa71aa99 namespace=moby
	Jul 19 23:28:50 functional-001000 dockerd[6893]: time="2023-07-19T23:28:50.763538779Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:28:50 functional-001000 dockerd[6887]: time="2023-07-19T23:28:50.862682209Z" level=info msg="Layer sha256:c81cf61a7a8dc20bd4d14f852d078f1935f3e50c7027f875c42405815352332f cleaned up"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                  CREATED              STATE               NAME                        ATTEMPT             POD ID
	a51669e0da4a3       72565bf5bbedf                                                                                          15 seconds ago       Exited              echoserver-arm              2                   34d704c481324
	6db9649ad8185       72565bf5bbedf                                                                                          16 seconds ago       Exited              echoserver-arm              2                   54dd9b724c380
	4a1a8ef78a6b7       kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   16 seconds ago       Running             dashboard-metrics-scraper   0                   af231622bed3e
	ca5c2b98e8094       kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         18 seconds ago       Running             kubernetes-dashboard        0                   395cce4ece674
	06d07c456e6a5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e    38 seconds ago       Exited              mount-munger                0                   9c8a1e8f5874e
	b9d7489c7c9d1       nginx@sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef                          46 seconds ago       Running             myfrontend                  0                   d01d5dedca1ed
	4b0b040421356       nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                          47 seconds ago       Running             nginx                       0                   0e3de77f3e58a
	07e1441f12b2b       97e04611ad434                                                                                          About a minute ago   Running             coredns                     2                   bbee310b87737
	47603d4108c8a       ba04bb24b9575                                                                                          About a minute ago   Running             storage-provisioner         2                   4aa4d337664a0
	5c133901891ae       fb73e92641fd5                                                                                          About a minute ago   Running             kube-proxy                  2                   3aafd82ac9d94
	332a779a70778       39dfb036b0986                                                                                          About a minute ago   Running             kube-apiserver              0                   e758544d71e21
	f56037f02c9db       24bc64e911039                                                                                          About a minute ago   Running             etcd                        2                   4e6bf19d8db8b
	2fbe350d0c457       ab3683b584ae5                                                                                          About a minute ago   Running             kube-controller-manager     2                   c50d94589c9fc
	083206f5500f6       bcb9e554eaab6                                                                                          About a minute ago   Running             kube-scheduler              2                   81f6e364a676d
	5cb5d36e41848       ba04bb24b9575                                                                                          2 minutes ago        Exited              storage-provisioner         1                   de082a14a1e78
	2a02c5c38cb11       fb73e92641fd5                                                                                          2 minutes ago        Exited              kube-proxy                  1                   2eb5bd523e6be
	d44530f5da192       97e04611ad434                                                                                          2 minutes ago        Exited              coredns                     1                   008350ad70613
	498e90fee0f95       bcb9e554eaab6                                                                                          2 minutes ago        Exited              kube-scheduler              1                   87f4213230ebf
	308120d05999f       24bc64e911039                                                                                          2 minutes ago        Exited              etcd                        1                   49e4917a3b008
	1cc87be3c3b96       ab3683b584ae5                                                                                          2 minutes ago        Exited              kube-controller-manager     1                   8a9626f0b0d1c
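Both echoserver-arm containers above are Exited at ATTEMPT 2, i.e. the hello-node and hello-node-connect backends were restart-looping while the test polled the service. Their last output can be pulled from the Docker runtime inside the node (mirroring the `ssh <cmd>` form used elsewhere in this run; container ID abbreviated as in the table, pod name from "describe nodes" below):

  $ out/minikube-darwin-arm64 -p functional-001000 ssh docker logs a51669e0da4a
  $ kubectl --context functional-001000 describe pod hello-node-connect-58d66798bb-xlbs9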
	
	* 
	* ==> coredns [07e1441f12b2] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53601 - 46376 "HINFO IN 2921638397502402876.3063468808989546331. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.188580059s
	[INFO] 10.244.0.1:54447 - 47326 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000111661s
	[INFO] 10.244.0.1:47645 - 45366 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000103786s
	[INFO] 10.244.0.1:26160 - 36355 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000045206s
	[INFO] 10.244.0.1:28182 - 4046 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001435131s
	[INFO] 10.244.0.1:5841 - 42381 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000066538s
	[INFO] 10.244.0.1:57526 - 37569 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000091037s
	
	* 
	* ==> coredns [d44530f5da19] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36161 - 11535 "HINFO IN 4930080070669016046.4625676989498275589. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.304515134s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
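This is the pre-restart CoreDNS instance shutting down cleanly (SIGTERM, then a 5s lameduck window); the replacement instance above resolved nginx-svc without errors. If in-cluster DNS were still suspected in the connect failure, a one-off lookup from the busybox image already cached in this run would settle it (a hypothetical spot-check, not part of the captured run):

  $ kubectl --context functional-001000 run dns-check --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- nslookup hello-node-connect.default.svc.cluster.local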
	
	* 
	* ==> describe nodes <==
	* Name:               functional-001000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-001000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4
	                    minikube.k8s.io/name=functional-001000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_19T16_26_06_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Jul 2023 23:26:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-001000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Jul 2023 23:28:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Jul 2023 23:28:37 +0000   Wed, 19 Jul 2023 23:26:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Jul 2023 23:28:37 +0000   Wed, 19 Jul 2023 23:26:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Jul 2023 23:28:37 +0000   Wed, 19 Jul 2023 23:26:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Jul 2023 23:28:37 +0000   Wed, 19 Jul 2023 23:26:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-001000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 4dce0ada8b7549b781b6731a15154251
	  System UUID:                4dce0ada8b7549b781b6731a15154251
	  Boot ID:                    6e89b4e9-df63-4490-9a2b-af9c73654745
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-dhknm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     hello-node-connect-58d66798bb-xlbs9           0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 coredns-5d78c9869d-jnpdk                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m42s
	  kube-system                 etcd-functional-001000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m55s
	  kube-system                 kube-apiserver-functional-001000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-functional-001000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kube-proxy-xsbw7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 kube-scheduler-functional-001000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kubernetes-dashboard        dashboard-metrics-scraper-5dd9cbfd69-t65x6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kubernetes-dashboard        kubernetes-dashboard-5c5cfc8747-fhb6l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m41s              kube-proxy       
	  Normal  Starting                 83s                kube-proxy       
	  Normal  Starting                 2m10s              kube-proxy       
	  Normal  Starting                 2m55s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m55s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m55s              kubelet          Node functional-001000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m55s              kubelet          Node functional-001000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m55s              kubelet          Node functional-001000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m52s              kubelet          Node functional-001000 status is now: NodeReady
	  Normal  RegisteredNode           2m43s              node-controller  Node functional-001000 event: Registered Node functional-001000 in Controller
	  Normal  RegisteredNode           118s               node-controller  Node functional-001000 event: Registered Node functional-001000 in Controller
	  Normal  Starting                 89s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  89s (x8 over 89s)  kubelet          Node functional-001000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s (x8 over 89s)  kubelet          Node functional-001000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     89s (x7 over 89s)  kubelet          Node functional-001000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           73s                node-controller  Node functional-001000 event: Registered Node functional-001000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.099027] systemd-fstab-generator[3970]: Ignoring "noauto" for root device
	[  +0.108407] systemd-fstab-generator[3983]: Ignoring "noauto" for root device
	[ +11.378100] systemd-fstab-generator[4531]: Ignoring "noauto" for root device
	[  +0.081007] systemd-fstab-generator[4542]: Ignoring "noauto" for root device
	[  +0.078313] systemd-fstab-generator[4553]: Ignoring "noauto" for root device
	[  +0.080195] systemd-fstab-generator[4575]: Ignoring "noauto" for root device
	[  +0.096024] systemd-fstab-generator[4646]: Ignoring "noauto" for root device
	[  +6.724539] kauditd_printk_skb: 34 callbacks suppressed
	[Jul19 23:27] systemd-fstab-generator[6416]: Ignoring "noauto" for root device
	[  +0.153573] systemd-fstab-generator[6448]: Ignoring "noauto" for root device
	[  +0.115519] systemd-fstab-generator[6459]: Ignoring "noauto" for root device
	[  +0.107607] systemd-fstab-generator[6472]: Ignoring "noauto" for root device
	[ +11.458077] systemd-fstab-generator[7051]: Ignoring "noauto" for root device
	[  +0.085168] systemd-fstab-generator[7062]: Ignoring "noauto" for root device
	[  +0.090202] systemd-fstab-generator[7073]: Ignoring "noauto" for root device
	[  +0.082221] systemd-fstab-generator[7084]: Ignoring "noauto" for root device
	[  +0.106586] systemd-fstab-generator[7154]: Ignoring "noauto" for root device
	[  +1.039401] systemd-fstab-generator[7406]: Ignoring "noauto" for root device
	[  +5.671711] kauditd_printk_skb: 29 callbacks suppressed
	[ +16.384179] kauditd_printk_skb: 3 callbacks suppressed
	[Jul19 23:28] kauditd_printk_skb: 10 callbacks suppressed
	[  +9.226177] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +3.451382] kauditd_printk_skb: 8 callbacks suppressed
	[ +10.510223] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.821884] kauditd_printk_skb: 6 callbacks suppressed
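The "Driver has suspect GRO implementation" warning above is a known artifact of the virtio NIC under the qemu2 driver and can depress in-guest TCP throughput. As a speculative A/B mitigation only (assuming ethtool is present in the Buildroot guest image; the harness does not do this), GRO could be switched off on eth0:

  $ out/minikube-darwin-arm64 -p functional-001000 ssh sudo ethtool -K eth0 gro off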
	
	* 
	* ==> etcd [308120d05999] <==
	* {"level":"info","ts":"2023-07-19T23:26:49.316Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-19T23:26:49.316Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-19T23:26:49.316Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-19T23:26:49.316Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-19T23:26:49.316Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-19T23:26:50.579Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-07-19T23:26:50.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-07-19T23:26:50.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-07-19T23:26:50.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-07-19T23:26:50.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-07-19T23:26:50.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-07-19T23:26:50.580Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-07-19T23:26:50.584Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T23:26:50.585Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T23:26:50.586Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-19T23:26:50.587Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-19T23:26:50.584Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-001000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-19T23:26:50.587Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-19T23:26:50.587Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-07-19T23:27:19.869Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-07-19T23:27:19.869Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-001000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"info","ts":"2023-07-19T23:27:19.876Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-07-19T23:27:19.877Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-19T23:27:19.879Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-19T23:27:19.879Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-001000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [f56037f02c9d] <==
	* {"level":"info","ts":"2023-07-19T23:27:33.892Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-19T23:27:33.892Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-19T23:27:33.892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-07-19T23:27:33.892Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-07-19T23:27:33.892Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T23:27:33.892Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T23:27:33.898Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-07-19T23:27:33.899Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-19T23:27:33.899Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-19T23:27:33.901Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-19T23:27:33.901Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-07-19T23:27:35.539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-07-19T23:27:35.539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-07-19T23:27:35.539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-07-19T23:27:35.539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-07-19T23:27:35.539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-07-19T23:27:35.539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-07-19T23:27:35.540Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-07-19T23:27:35.545Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-001000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-19T23:27:35.545Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T23:27:35.545Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-19T23:27:35.545Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-19T23:27:35.545Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T23:27:35.548Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-07-19T23:27:35.548Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	
	* 
	* ==> kernel <==
	*  23:29:01 up 3 min,  0 users,  load average: 1.21, 0.45, 0.17
	Linux functional-001000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [332a779a7077] <==
	* I0719 23:27:36.271639       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 23:27:36.272044       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0719 23:27:36.272068       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0719 23:27:36.282671       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0719 23:27:36.282686       1 aggregator.go:152] initial CRD sync complete...
	I0719 23:27:36.282689       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 23:27:36.282692       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 23:27:36.282694       1 cache.go:39] Caches are synced for autoregister controller
	I0719 23:27:37.043189       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0719 23:27:37.174726       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 23:27:37.853512       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0719 23:27:37.857805       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0719 23:27:37.869513       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0719 23:27:37.877527       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 23:27:37.880114       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 23:27:48.338726       1 controller.go:624] quota admission added evaluator for: endpoints
	I0719 23:27:48.373680       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 23:27:51.522223       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs=map[IPv4:10.99.121.138]
	I0719 23:28:10.801934       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.108.59.90]
	I0719 23:28:22.988012       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0719 23:28:23.031664       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.101.194.133]
	I0719 23:28:28.506628       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.110.156.49]
	I0719 23:28:38.941225       1 controller.go:624] quota admission added evaluator for: namespaces
	I0719 23:28:39.012170       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs=map[IPv4:10.110.187.189]
	I0719 23:28:39.047175       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs=map[IPv4:10.109.42.125]
	
	* 
	* ==> kube-controller-manager [1cc87be3c3b9] <==
	* I0719 23:27:03.474065       1 shared_informer.go:318] Caches are synced for namespace
	I0719 23:27:03.477196       1 shared_informer.go:318] Caches are synced for HPA
	I0719 23:27:03.478237       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0719 23:27:03.479296       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0719 23:27:03.479327       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0719 23:27:03.479337       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0719 23:27:03.480419       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0719 23:27:03.481493       1 shared_informer.go:318] Caches are synced for ephemeral
	I0719 23:27:03.482552       1 shared_informer.go:318] Caches are synced for attach detach
	I0719 23:27:03.483675       1 shared_informer.go:318] Caches are synced for job
	I0719 23:27:03.484779       1 shared_informer.go:318] Caches are synced for taint
	I0719 23:27:03.484832       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0719 23:27:03.484884       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-001000"
	I0719 23:27:03.484918       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0719 23:27:03.484832       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0719 23:27:03.484954       1 taint_manager.go:211] "Sending events to api server"
	I0719 23:27:03.485017       1 event.go:307] "Event occurred" object="functional-001000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-001000 event: Registered Node functional-001000 in Controller"
	I0719 23:27:03.487821       1 shared_informer.go:318] Caches are synced for disruption
	I0719 23:27:03.533049       1 shared_informer.go:318] Caches are synced for endpoint
	I0719 23:27:03.534153       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0719 23:27:03.656376       1 shared_informer.go:318] Caches are synced for resource quota
	I0719 23:27:03.672357       1 shared_informer.go:318] Caches are synced for resource quota
	I0719 23:27:03.990852       1 shared_informer.go:318] Caches are synced for garbage collector
	I0719 23:27:03.995085       1 shared_informer.go:318] Caches are synced for garbage collector
	I0719 23:27:03.995249       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [2fbe350d0c45] <==
	* I0719 23:27:48.990260       1 shared_informer.go:318] Caches are synced for garbage collector
	I0719 23:27:48.990379       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0719 23:28:00.864105       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0719 23:28:22.989355       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0719 23:28:22.996494       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-xlbs9"
	I0719 23:28:28.466053       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-7b684b55f9 to 1"
	I0719 23:28:28.468208       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-dhknm"
	I0719 23:28:38.963282       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5dd9cbfd69 to 1"
	I0719 23:28:38.968346       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0719 23:28:38.970943       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 23:28:38.972522       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-5c5cfc8747 to 1"
	E0719 23:28:38.977867       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 23:28:38.977968       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0719 23:28:38.977975       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0719 23:28:38.981166       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 23:28:38.981196       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0719 23:28:38.983137       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0719 23:28:38.986464       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 23:28:38.986487       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0719 23:28:38.990943       1 replica_set.go:544] sync "kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" failed with pods "dashboard-metrics-scraper-5dd9cbfd69-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0719 23:28:38.991085       1 replica_set.go:544] sync "kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" failed with pods "kubernetes-dashboard-5c5cfc8747-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0719 23:28:38.991103       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-5dd9cbfd69-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0719 23:28:38.991152       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-5c5cfc8747-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0719 23:28:39.015589       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-5c5cfc8747-fhb6l"
	I0719 23:28:39.047520       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5dd9cbfd69-t65x6"
	
	* 
	* ==> kube-proxy [2a02c5c38cb1] <==
	* I0719 23:26:51.305720       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0719 23:26:51.312628       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0719 23:26:51.312705       1 server_others.go:554] "Using iptables proxy"
	I0719 23:26:51.330389       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0719 23:26:51.330405       1 server_others.go:192] "Using iptables Proxier"
	I0719 23:26:51.330421       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 23:26:51.330597       1 server.go:658] "Version info" version="v1.27.3"
	I0719 23:26:51.330606       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 23:26:51.331089       1 config.go:188] "Starting service config controller"
	I0719 23:26:51.331097       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0719 23:26:51.331104       1 config.go:97] "Starting endpoint slice config controller"
	I0719 23:26:51.331105       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0719 23:26:51.331227       1 config.go:315] "Starting node config controller"
	I0719 23:26:51.331234       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0719 23:26:51.431907       1 shared_informer.go:318] Caches are synced for node config
	I0719 23:26:51.431923       1 shared_informer.go:318] Caches are synced for service config
	I0719 23:26:51.431985       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [5c133901891a] <==
	* I0719 23:27:38.352112       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0719 23:27:38.352139       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0719 23:27:38.352147       1 server_others.go:554] "Using iptables proxy"
	I0719 23:27:38.370448       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0719 23:27:38.370481       1 server_others.go:192] "Using iptables Proxier"
	I0719 23:27:38.370503       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 23:27:38.370710       1 server.go:658] "Version info" version="v1.27.3"
	I0719 23:27:38.370718       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 23:27:38.371197       1 config.go:188] "Starting service config controller"
	I0719 23:27:38.371202       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0719 23:27:38.371211       1 config.go:97] "Starting endpoint slice config controller"
	I0719 23:27:38.371213       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0719 23:27:38.371380       1 config.go:315] "Starting node config controller"
	I0719 23:27:38.371382       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0719 23:27:38.471271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0719 23:27:38.471286       1 shared_informer.go:318] Caches are synced for service config
	I0719 23:27:38.471517       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [083206f5500f] <==
	* I0719 23:27:34.181124       1 serving.go:348] Generated self-signed cert in-memory
	W0719 23:27:36.193691       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 23:27:36.193712       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 23:27:36.193716       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 23:27:36.193719       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 23:27:36.230142       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0719 23:27:36.230222       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 23:27:36.231453       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0719 23:27:36.234754       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 23:27:36.234824       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 23:27:36.236300       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 23:27:36.337041       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [498e90fee0f9] <==
	* I0719 23:26:49.940014       1 serving.go:348] Generated self-signed cert in-memory
	W0719 23:26:51.253987       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 23:26:51.254018       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 23:26:51.254028       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 23:26:51.254035       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 23:26:51.300319       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0719 23:26:51.300333       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 23:26:51.301328       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0719 23:26:51.301680       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 23:26:51.301690       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 23:26:51.301783       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 23:26:51.402157       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 23:27:19.904948       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0719 23:27:19.904971       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0719 23:27:19.905021       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0719 23:27:19.905037       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-07-19 23:25:49 UTC, ends at Wed 2023-07-19 23:29:01 UTC. --
	Jul 19 23:28:32 functional-001000 kubelet[7412]: E0719 23:28:32.781390    7412 iptables.go:575] "Could not set up iptables canary" err=<
	Jul 19 23:28:32 functional-001000 kubelet[7412]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 23:28:32 functional-001000 kubelet[7412]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 23:28:32 functional-001000 kubelet[7412]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jul 19 23:28:32 functional-001000 kubelet[7412]: I0719 23:28:32.848061    7412 scope.go:115] "RemoveContainer" containerID="11ceff1f164b34e0efb061e157d09e41b3e921d2a5c528feac4cb1dff09a8a81"
	Jul 19 23:28:39 functional-001000 kubelet[7412]: I0719 23:28:39.021381    7412 topology_manager.go:212] "Topology Admit Handler"
	Jul 19 23:28:39 functional-001000 kubelet[7412]: I0719 23:28:39.063307    7412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/11f9ce53-35e9-4df6-8c28-607b42d0e48e-tmp-volume\") pod \"kubernetes-dashboard-5c5cfc8747-fhb6l\" (UID: \"11f9ce53-35e9-4df6-8c28-607b42d0e48e\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-fhb6l"
	Jul 19 23:28:39 functional-001000 kubelet[7412]: I0719 23:28:39.063361    7412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2ls7\" (UniqueName: \"kubernetes.io/projected/11f9ce53-35e9-4df6-8c28-607b42d0e48e-kube-api-access-b2ls7\") pod \"kubernetes-dashboard-5c5cfc8747-fhb6l\" (UID: \"11f9ce53-35e9-4df6-8c28-607b42d0e48e\") " pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-fhb6l"
	Jul 19 23:28:39 functional-001000 kubelet[7412]: I0719 23:28:39.063864    7412 topology_manager.go:212] "Topology Admit Handler"
	Jul 19 23:28:39 functional-001000 kubelet[7412]: I0719 23:28:39.164138    7412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/43091076-8878-455e-aa52-02e19c3f7536-tmp-volume\") pod \"dashboard-metrics-scraper-5dd9cbfd69-t65x6\" (UID: \"43091076-8878-455e-aa52-02e19c3f7536\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-t65x6"
	Jul 19 23:28:39 functional-001000 kubelet[7412]: I0719 23:28:39.164188    7412 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l724q\" (UniqueName: \"kubernetes.io/projected/43091076-8878-455e-aa52-02e19c3f7536-kube-api-access-l724q\") pod \"dashboard-metrics-scraper-5dd9cbfd69-t65x6\" (UID: \"43091076-8878-455e-aa52-02e19c3f7536\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-t65x6"
	Jul 19 23:28:45 functional-001000 kubelet[7412]: I0719 23:28:45.777344    7412 scope.go:115] "RemoveContainer" containerID="8c77e4f6384a83bf9550d2fdfcca3ff545713034933288cff5e3106fc9f61558"
	Jul 19 23:28:45 functional-001000 kubelet[7412]: I0719 23:28:45.785901    7412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-5c5cfc8747-fhb6l" podStartSLOduration=2.500965068 podCreationTimestamp="2023-07-19 23:28:39 +0000 UTC" firstStartedPulling="2023-07-19 23:28:39.51478632 +0000 UTC m=+66.810222730" lastFinishedPulling="2023-07-19 23:28:43.799693863 +0000 UTC m=+71.095130272" observedRunningTime="2023-07-19 23:28:44.832332891 +0000 UTC m=+72.127769342" watchObservedRunningTime="2023-07-19 23:28:45.78587261 +0000 UTC m=+73.081308978"
	Jul 19 23:28:46 functional-001000 kubelet[7412]: I0719 23:28:46.777245    7412 scope.go:115] "RemoveContainer" containerID="250849b1afced1df2676b8e475a8300b8eda1cffb94303a468f532ebd00ff9ae"
	Jul 19 23:28:46 functional-001000 kubelet[7412]: I0719 23:28:46.901902    7412 scope.go:115] "RemoveContainer" containerID="8c77e4f6384a83bf9550d2fdfcca3ff545713034933288cff5e3106fc9f61558"
	Jul 19 23:28:46 functional-001000 kubelet[7412]: I0719 23:28:46.902066    7412 scope.go:115] "RemoveContainer" containerID="6db9649ad818549b8503c40071e0e6bf323a37876827a4f8471787434f49e25e"
	Jul 19 23:28:46 functional-001000 kubelet[7412]: E0719 23:28:46.902146    7412 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-dhknm_default(8f2c0f0b-3cbc-44f3-8b63-461820c2b782)\"" pod="default/hello-node-7b684b55f9-dhknm" podUID=8f2c0f0b-3cbc-44f3-8b63-461820c2b782
	Jul 19 23:28:46 functional-001000 kubelet[7412]: I0719 23:28:46.908897    7412 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-5dd9cbfd69-t65x6" podStartSLOduration=1.767350663 podCreationTimestamp="2023-07-19 23:28:39 +0000 UTC" firstStartedPulling="2023-07-19 23:28:39.582316484 +0000 UTC m=+66.877752894" lastFinishedPulling="2023-07-19 23:28:45.723839351 +0000 UTC m=+73.019275761" observedRunningTime="2023-07-19 23:28:45.895422093 +0000 UTC m=+73.190858503" watchObservedRunningTime="2023-07-19 23:28:46.90887353 +0000 UTC m=+74.204309939"
	Jul 19 23:28:47 functional-001000 kubelet[7412]: I0719 23:28:47.942117    7412 scope.go:115] "RemoveContainer" containerID="250849b1afced1df2676b8e475a8300b8eda1cffb94303a468f532ebd00ff9ae"
	Jul 19 23:28:47 functional-001000 kubelet[7412]: I0719 23:28:47.944818    7412 scope.go:115] "RemoveContainer" containerID="a51669e0da4a39f8394d184d3ab1685e643724f0314cb575f21437cf05fd3749"
	Jul 19 23:28:47 functional-001000 kubelet[7412]: E0719 23:28:47.944903    7412 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-xlbs9_default(7f191746-b65e-42b3-8776-cff449eb810c)\"" pod="default/hello-node-connect-58d66798bb-xlbs9" podUID=7f191746-b65e-42b3-8776-cff449eb810c
	Jul 19 23:28:48 functional-001000 kubelet[7412]: I0719 23:28:48.958568    7412 scope.go:115] "RemoveContainer" containerID="a51669e0da4a39f8394d184d3ab1685e643724f0314cb575f21437cf05fd3749"
	Jul 19 23:28:48 functional-001000 kubelet[7412]: E0719 23:28:48.958759    7412 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-xlbs9_default(7f191746-b65e-42b3-8776-cff449eb810c)\"" pod="default/hello-node-connect-58d66798bb-xlbs9" podUID=7f191746-b65e-42b3-8776-cff449eb810c
	Jul 19 23:28:57 functional-001000 kubelet[7412]: I0719 23:28:57.778173    7412 scope.go:115] "RemoveContainer" containerID="6db9649ad818549b8503c40071e0e6bf323a37876827a4f8471787434f49e25e"
	Jul 19 23:28:57 functional-001000 kubelet[7412]: E0719 23:28:57.778877    7412 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-dhknm_default(8f2c0f0b-3cbc-44f3-8b63-461820c2b782)\"" pod="default/hello-node-7b684b55f9-dhknm" podUID=8f2c0f0b-3cbc-44f3-8b63-461820c2b782
	
	* 
	* ==> kubernetes-dashboard [ca5c2b98e809] <==
	* 2023/07/19 23:28:44 Using namespace: kubernetes-dashboard
	2023/07/19 23:28:44 Using in-cluster config to connect to apiserver
	2023/07/19 23:28:44 Using secret token for csrf signing
	2023/07/19 23:28:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/07/19 23:28:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/07/19 23:28:44 Successful initial request to the apiserver, version: v1.27.3
	2023/07/19 23:28:44 Generating JWE encryption key
	2023/07/19 23:28:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/07/19 23:28:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/07/19 23:28:44 Initializing JWE encryption key from synchronized object
	2023/07/19 23:28:44 Creating in-cluster Sidecar client
	2023/07/19 23:28:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/07/19 23:28:44 Serving insecurely on HTTP port: 9090
	2023/07/19 23:28:44 Starting overwatch
	
	* 
	* ==> storage-provisioner [47603d4108c8] <==
	* I0719 23:27:38.416473       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 23:27:38.429155       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 23:27:38.429173       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 23:27:55.821930       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 23:27:55.821996       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-001000_ebdd512c-5508-462d-aab2-b0994a57e48b!
	I0719 23:27:55.822635       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"111b63d5-e13f-4232-a71a-875f7bf09170", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-001000_ebdd512c-5508-462d-aab2-b0994a57e48b became leader
	I0719 23:27:55.922189       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-001000_ebdd512c-5508-462d-aab2-b0994a57e48b!
	I0719 23:28:00.864260       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0719 23:28:00.864283       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    3e23f649-3d5a-47e9-b855-2936214bd7fc 352 0 2023-07-19 23:26:20 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-07-19 23:26:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-89312e85-7274-43c6-920e-b04ab6adcbc1 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  89312e85-7274-43c6-920e-b04ab6adcbc1 625 0 2023-07-19 23:28:00 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-07-19 23:28:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-07-19 23:28:00 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0719 23:28:00.865010       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-89312e85-7274-43c6-920e-b04ab6adcbc1" provisioned
	I0719 23:28:00.865123       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0719 23:28:00.865193       1 volume_store.go:212] Trying to save persistentvolume "pvc-89312e85-7274-43c6-920e-b04ab6adcbc1"
	I0719 23:28:00.866003       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"89312e85-7274-43c6-920e-b04ab6adcbc1", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0719 23:28:00.873169       1 volume_store.go:219] persistentvolume "pvc-89312e85-7274-43c6-920e-b04ab6adcbc1" saved
	I0719 23:28:00.873657       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"89312e85-7274-43c6-920e-b04ab6adcbc1", APIVersion:"v1", ResourceVersion:"625", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-89312e85-7274-43c6-920e-b04ab6adcbc1
	
	* 
	* ==> storage-provisioner [5cb5d36e4184] <==
	* I0719 23:26:49.917276       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 23:26:51.315514       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 23:26:51.316223       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 23:27:08.735441       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 23:27:08.739011       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-001000_59e0b384-088b-440c-b454-3d1fa1441674!
	I0719 23:27:08.742756       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"111b63d5-e13f-4232-a71a-875f7bf09170", APIVersion:"v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-001000_59e0b384-088b-440c-b454-3d1fa1441674 became leader
	I0719 23:27:08.839961       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-001000_59e0b384-088b-440c-b454-3d1fa1441674!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-001000 -n functional-001000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-001000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-001000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-001000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-001000/192.168.105.4
	Start Time:       Wed, 19 Jul 2023 16:28:20 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  docker://06d07c456e6a57b3e2f2ad46777346d601872835ceda09600c2fafd5ca6d9456
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 19 Jul 2023 16:28:23 -0700
	      Finished:     Wed, 19 Jul 2023 16:28:23 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vsdjv (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vsdjv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  41s   default-scheduler  Successfully assigned default/busybox-mount to functional-001000
	  Normal  Pulling    40s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     38s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.318993452s (2.319054991s including waiting)
	  Normal  Created    38s   kubelet            Created container mount-munger
	  Normal  Started    38s   kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (39.07s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-001000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-001000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 80. stderr: I0719 16:28:10.459826    2629 out.go:296] Setting OutFile to fd 1 ...
I0719 16:28:10.460062    2629 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:10.460065    2629 out.go:309] Setting ErrFile to fd 2...
I0719 16:28:10.460067    2629 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:10.460172    2629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
I0719 16:28:10.460380    2629 mustload.go:65] Loading cluster: functional-001000
I0719 16:28:10.460561    2629 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:28:10.465168    2629 out.go:177] 
W0719 16:28:10.469176    2629 out.go:239] X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/functional-001000/monitor: connect: connection refused
X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/functional-001000/monitor: connect: connection refused
W0719 16:28:10.469181    2629 out.go:239] * 
* 
W0719 16:28:10.470727    2629 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                           │
│    * If the above advice does not help, please let us know:                                                               │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
│                                                                                                                           │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
│    * Please also attach the following file to the GitHub issue:                                                           │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_tunnel_7075cb44437691034d825beac909ba5df9688569_0.log    │
│                                                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0719 16:28:10.474179    2629 out.go:177] 

stdout: 

functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-001000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-001000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-001000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-001000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2628: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-001000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-001000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.17s)

TestImageBuild/serial/BuildWithBuildArg (1.04s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-052000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-052000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 367465e05fcd
	Removing intermediate container 367465e05fcd
	 ---> 487b430bc207
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 35693689b4b6
	Removing intermediate container 35693689b4b6
	 ---> 796266c65629
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 0d31a2686857
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-052000 -n image-052000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-052000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-001000                     | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | --kill=true                              |                   |         |         |                     |                     |
	| service        | functional-001000 service                | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | hello-node-connect --url                 |                   |         |         |                     |                     |
	| service        | functional-001000 service list           | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	| service        | functional-001000 service list           | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | -o json                                  |                   |         |         |                     |                     |
	| service        | functional-001000 service                | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | --namespace=default --https              |                   |         |         |                     |                     |
	|                | --url hello-node                         |                   |         |         |                     |                     |
	| service        | functional-001000                        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | service hello-node --url                 |                   |         |         |                     |                     |
	|                | --format={{.IP}}                         |                   |         |         |                     |                     |
	| service        | functional-001000 service                | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | hello-node --url                         |                   |         |         |                     |                     |
	| start          | -p functional-001000                     | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-001000                     | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | --dry-run --memory                       |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                  |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| start          | -p functional-001000 --dry-run           | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	|                | --driver=qemu2                           |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                       | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | -p functional-001000                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                   |         |         |                     |                     |
	| image          | functional-001000                        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format short                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-001000                        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format yaml                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| ssh            | functional-001000 ssh pgrep              | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | buildkitd                                |                   |         |         |                     |                     |
	| image          | functional-001000 image build -t         | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | localhost/my-image:functional-001000     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                   |         |         |                     |                     |
	| image          | functional-001000 image ls               | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	| image          | functional-001000                        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format json                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| image          | functional-001000                        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format table                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                        |                   |         |         |                     |                     |
	| update-context | functional-001000                        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-001000                        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| update-context | functional-001000                        | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | update-context                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                   |         |         |                     |                     |
	| delete         | -p functional-001000                     | functional-001000 | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:29 PDT |
	| start          | -p image-052000 --driver=qemu2           | image-052000      | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:29 PDT |
	|                |                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-052000      | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:29 PDT |
	|                | ./testdata/image-build/test-normal       |                   |         |         |                     |                     |
	|                | -p image-052000                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-052000      | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:29 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                   |         |         |                     |                     |
	|                | image-052000                             |                   |         |         |                     |                     |
	|----------------|------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/19 16:29:02
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 16:29:02.298322    2830 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:29:02.298481    2830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:29:02.298482    2830 out.go:309] Setting ErrFile to fd 2...
	I0719 16:29:02.298485    2830 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:29:02.298588    2830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:29:02.299587    2830 out.go:303] Setting JSON to false
	I0719 16:29:02.315254    2830 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3513,"bootTime":1689805829,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:29:02.315324    2830 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:29:02.318844    2830 out.go:177] * [image-052000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:29:02.325846    2830 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:29:02.329842    2830 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:29:02.325884    2830 notify.go:220] Checking for updates...
	I0719 16:29:02.332822    2830 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:29:02.335870    2830 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:29:02.338814    2830 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:29:02.341872    2830 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:29:02.345010    2830 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:29:02.347710    2830 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:29:02.353823    2830 start.go:298] selected driver: qemu2
	I0719 16:29:02.353825    2830 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:29:02.353831    2830 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:29:02.353901    2830 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:29:02.354883    2830 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:29:02.359713    2830 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0719 16:29:02.359796    2830 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 16:29:02.359808    2830 cni.go:84] Creating CNI manager for ""
	I0719 16:29:02.359813    2830 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:29:02.359817    2830 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:29:02.359823    2830 start_flags.go:319] config:
	{Name:image-052000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-052000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:29:02.363810    2830 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:29:02.370847    2830 out.go:177] * Starting control plane node image-052000 in cluster image-052000
	I0719 16:29:02.374850    2830 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:29:02.374875    2830 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:29:02.374885    2830 cache.go:57] Caching tarball of preloaded images
	I0719 16:29:02.374958    2830 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:29:02.374962    2830 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:29:02.375155    2830 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/config.json ...
	I0719 16:29:02.375165    2830 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/config.json: {Name:mk4e0662541d6a27958940817ae9d7f32e589ef0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:29:02.375372    2830 start.go:365] acquiring machines lock for image-052000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:29:02.375399    2830 start.go:369] acquired machines lock for "image-052000" in 23.333µs
	I0719 16:29:02.375407    2830 start.go:93] Provisioning new machine with config: &{Name:image-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-052000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:29:02.375432    2830 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:29:02.382835    2830 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0719 16:29:02.403878    2830 start.go:159] libmachine.API.Create for "image-052000" (driver="qemu2")
	I0719 16:29:02.403902    2830 client.go:168] LocalClient.Create starting
	I0719 16:29:02.403961    2830 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:29:02.403983    2830 main.go:141] libmachine: Decoding PEM data...
	I0719 16:29:02.403990    2830 main.go:141] libmachine: Parsing certificate...
	I0719 16:29:02.404035    2830 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:29:02.404048    2830 main.go:141] libmachine: Decoding PEM data...
	I0719 16:29:02.404054    2830 main.go:141] libmachine: Parsing certificate...
	I0719 16:29:02.404351    2830 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:29:02.661441    2830 main.go:141] libmachine: Creating SSH key...
	I0719 16:29:02.763141    2830 main.go:141] libmachine: Creating Disk image...
	I0719 16:29:02.763146    2830 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:29:02.763283    2830 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/disk.qcow2
	I0719 16:29:02.785545    2830 main.go:141] libmachine: STDOUT: 
	I0719 16:29:02.785559    2830 main.go:141] libmachine: STDERR: 
	I0719 16:29:02.785617    2830 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/disk.qcow2 +20000M
	I0719 16:29:02.792875    2830 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:29:02.792884    2830 main.go:141] libmachine: STDERR: 
	I0719 16:29:02.792895    2830 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/disk.qcow2
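
The disk image above is built in two qemu-img steps: convert the raw scaffold to qcow2, then grow it by the requested 20000 MB. A minimal Go sketch of those two invocations (paths shortened, error handling simplified; not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createDisk mirrors the two qemu-img commands in the log: convert the
    // raw scaffold to qcow2, then grow the qcow2 image by sizeMB.
    func createDisk(machineDir string, sizeMB int) error {
        raw := machineDir + "/disk.qcow2.raw"
        qcow := machineDir + "/disk.qcow2"
        steps := [][]string{
            {"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow},
            {"qemu-img", "resize", qcow, fmt.Sprintf("+%dM", sizeMB)},
        }
        for _, args := range steps {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v: %v\n%s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        // Hypothetical machine directory; the log uses the .minikube/machines path.
        fmt.Println(createDisk("/tmp/image-052000", 20000))
    }
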
	I0719 16:29:02.792907    2830 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:29:02.792952    2830 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:de:1f:e5:ae:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/disk.qcow2
	I0719 16:29:02.827337    2830 main.go:141] libmachine: STDOUT: 
	I0719 16:29:02.827365    2830 main.go:141] libmachine: STDERR: 
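
Note the -netdev socket,id=net0,fd=3 in the command above: socket_vmnet_client connects to the /var/run/socket_vmnet unix socket and hands the connected socket to qemu as file descriptor 3. A sketch of that fd handoff mechanism in Go (illustrative only, not socket_vmnet's source):

    package main

    import (
        "net"
        "os"
        "os/exec"
    )

    func main() {
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            panic(err)
        }
        sockFile, err := conn.(*net.UnixConn).File()
        if err != nil {
            panic(err)
        }
        // Remaining qemu flags (machine type, firmware, drives, cdrom, ...) elided.
        cmd := exec.Command("qemu-system-aarch64",
            "-netdev", "socket,id=net0,fd=3",
            "-device", "virtio-net-pci,netdev=net0")
        // ExtraFiles[0] becomes fd 3 in the child, after stdin/stdout/stderr.
        cmd.ExtraFiles = []*os.File{sockFile}
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
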
	I0719 16:29:02.827368    2830 main.go:141] libmachine: Attempt 0
	I0719 16:29:02.827390    2830 main.go:141] libmachine: Searching for 6a:de:1f:e5:ae:4e in /var/db/dhcpd_leases ...
	I0719 16:29:02.827463    2830 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0719 16:29:02.827479    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:02.827490    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:02.827494    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:04.829635    2830 main.go:141] libmachine: Attempt 1
	I0719 16:29:04.829680    2830 main.go:141] libmachine: Searching for 6a:de:1f:e5:ae:4e in /var/db/dhcpd_leases ...
	I0719 16:29:04.830150    2830 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0719 16:29:04.830194    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:04.830222    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:04.830280    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:06.832387    2830 main.go:141] libmachine: Attempt 2
	I0719 16:29:06.832402    2830 main.go:141] libmachine: Searching for 6a:de:1f:e5:ae:4e in /var/db/dhcpd_leases ...
	I0719 16:29:06.832535    2830 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0719 16:29:06.832546    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:06.832551    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:06.832555    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:08.834550    2830 main.go:141] libmachine: Attempt 3
	I0719 16:29:08.834555    2830 main.go:141] libmachine: Searching for 6a:de:1f:e5:ae:4e in /var/db/dhcpd_leases ...
	I0719 16:29:08.834589    2830 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0719 16:29:08.834602    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:08.834606    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:08.834610    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:10.836618    2830 main.go:141] libmachine: Attempt 4
	I0719 16:29:10.836628    2830 main.go:141] libmachine: Searching for 6a:de:1f:e5:ae:4e in /var/db/dhcpd_leases ...
	I0719 16:29:10.836736    2830 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0719 16:29:10.836744    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:10.836748    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:10.836752    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:12.838851    2830 main.go:141] libmachine: Attempt 5
	I0719 16:29:12.838885    2830 main.go:141] libmachine: Searching for 6a:de:1f:e5:ae:4e in /var/db/dhcpd_leases ...
	I0719 16:29:12.838980    2830 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0719 16:29:12.838988    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:12.838994    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:12.838998    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:14.841121    2830 main.go:141] libmachine: Attempt 6
	I0719 16:29:14.841144    2830 main.go:141] libmachine: Searching for 6a:de:1f:e5:ae:4e in /var/db/dhcpd_leases ...
	I0719 16:29:14.841339    2830 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0719 16:29:14.841360    2830 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:de:1f:e5:ae:4e ID:1,6a:de:1f:e5:ae:4e Lease:0x64b9c349}
	I0719 16:29:14.841367    2830 main.go:141] libmachine: Found match: 6a:de:1f:e5:ae:4e
	I0719 16:29:14.841384    2830 main.go:141] libmachine: IP: 192.168.105.5
	I0719 16:29:14.841394    2830 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
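
The Attempt 0..6 loop above polls /var/db/dhcpd_leases every two seconds until the VM's MAC appears. Each lease entry is a simple key=value block (name=, ip_address=, hw_address=1,<mac>, ...), with ip_address preceding hw_address. A minimal sketch of that lookup, assuming this block format; not minikube's actual parser:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
        "time"
    )

    // lookupIP returns the ip_address of the dhcpd_leases entry whose
    // hw_address ends with mac. macOS writes MACs without zero padding
    // (e.g. 36:3a:2:96:5:da above), so mac must use the same form.
    func lookupIP(leasesPath, mac string) (string, error) {
        f, err := os.Open(leasesPath)
        if err != nil {
            return "", err
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ip_address=") {
                ip = strings.TrimPrefix(line, "ip_address=")
            }
            // hw_address lines look like "hw_address=1,6a:de:1f:e5:ae:4e"
            if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac) {
                return ip, nil
            }
        }
        return "", fmt.Errorf("%s not found in %s", mac, leasesPath)
    }

    func main() {
        for attempt := 0; attempt < 30; attempt++ {
            if ip, err := lookupIP("/var/db/dhcpd_leases", "6a:de:1f:e5:ae:4e"); err == nil {
                fmt.Println("IP:", ip)
                return
            }
            fmt.Println("Attempt", attempt)
            time.Sleep(2 * time.Second) // same cadence as the log
        }
    }
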
	I0719 16:29:15.848761    2830 machine.go:88] provisioning docker machine ...
	I0719 16:29:15.848777    2830 buildroot.go:166] provisioning hostname "image-052000"
	I0719 16:29:15.848837    2830 main.go:141] libmachine: Using SSH client type: native
	I0719 16:29:15.849110    2830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10129d170] 0x10129fbd0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0719 16:29:15.849114    2830 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-052000 && echo "image-052000" | sudo tee /etc/hostname
	I0719 16:29:15.905430    2830 main.go:141] libmachine: SSH cmd err, output: <nil>: image-052000
	
	I0719 16:29:15.905494    2830 main.go:141] libmachine: Using SSH client type: native
	I0719 16:29:15.905757    2830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10129d170] 0x10129fbd0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0719 16:29:15.905765    2830 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-052000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-052000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-052000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 16:29:15.963058    2830 main.go:141] libmachine: SSH cmd err, output: <nil>: 
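
Everything from the hostname command onward runs over SSH against the new VM (ssh -p 22 docker@192.168.105.5 with the machine's id_rsa). A stripped-down sketch of that run-a-command-over-SSH pattern using golang.org/x/crypto/ssh; minikube's own sshutil/ssh_runner layers retries, scp, and timeouts on top of this:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote executes one shell command on the guest and returns its
    // combined output, roughly what each ssh_runner line above does.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runRemote("192.168.105.5:22", "docker",
            "/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/id_rsa",
            `sudo hostname image-052000 && echo "image-052000" | sudo tee /etc/hostname`)
        fmt.Println(out, err)
    }
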
	I0719 16:29:15.963068    2830 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15585-1056/.minikube CaCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15585-1056/.minikube}
	I0719 16:29:15.963078    2830 buildroot.go:174] setting up certificates
	I0719 16:29:15.963083    2830 provision.go:83] configureAuth start
	I0719 16:29:15.963086    2830 provision.go:138] copyHostCerts
	I0719 16:29:15.963173    2830 exec_runner.go:144] found /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem, removing ...
	I0719 16:29:15.963176    2830 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem
	I0719 16:29:15.963279    2830 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem (1082 bytes)
	I0719 16:29:15.963448    2830 exec_runner.go:144] found /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem, removing ...
	I0719 16:29:15.963450    2830 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem
	I0719 16:29:15.963483    2830 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem (1123 bytes)
	I0719 16:29:15.963573    2830 exec_runner.go:144] found /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem, removing ...
	I0719 16:29:15.963574    2830 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem
	I0719 16:29:15.963604    2830 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem (1675 bytes)
	I0719 16:29:15.963669    2830 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem org=jenkins.image-052000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-052000]
	I0719 16:29:16.009459    2830 provision.go:172] copyRemoteCerts
	I0719 16:29:16.009512    2830 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 16:29:16.009517    2830 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/id_rsa Username:docker}
	I0719 16:29:16.041514    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 16:29:16.048635    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 16:29:16.055813    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0719 16:29:16.062616    2830 provision.go:86] duration metric: configureAuth took 99.531292ms
	I0719 16:29:16.062620    2830 buildroot.go:189] setting minikube options for container-runtime
	I0719 16:29:16.062714    2830 config.go:182] Loaded profile config "image-052000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:29:16.062745    2830 main.go:141] libmachine: Using SSH client type: native
	I0719 16:29:16.062973    2830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10129d170] 0x10129fbd0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0719 16:29:16.062976    2830 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 16:29:16.120970    2830 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 16:29:16.120973    2830 buildroot.go:70] root file system type: tmpfs
	I0719 16:29:16.121044    2830 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 16:29:16.121101    2830 main.go:141] libmachine: Using SSH client type: native
	I0719 16:29:16.121351    2830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10129d170] 0x10129fbd0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0719 16:29:16.121385    2830 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 16:29:16.181042    2830 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 16:29:16.181081    2830 main.go:141] libmachine: Using SSH client type: native
	I0719 16:29:16.181299    2830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10129d170] 0x10129fbd0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0719 16:29:16.181306    2830 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 16:29:16.520107    2830 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0719 16:29:16.520114    2830 machine.go:91] provisioned docker machine in 671.359167ms
	I0719 16:29:16.520119    2830 client.go:171] LocalClient.Create took 14.116466542s
	I0719 16:29:16.520139    2830 start.go:167] duration metric: libmachine.API.Create for "image-052000" took 14.11651625s
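
The docker.service update above is a diff-or-install idiom: render the unit, diff it against /lib/systemd/system/docker.service, and only on a difference (here the file did not exist yet, hence the "can't stat" output) move the new file into place and daemon-reload/enable/restart. The same logic in plain Go, as a hypothetical helper run on the guest rather than over SSH:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installUnit writes the rendered unit only if it differs from what is
    // on disk, then reloads/enables/restarts docker, mirroring the one-liner.
    func installUnit(rendered []byte, path string) error {
        current, _ := os.ReadFile(path) // a missing file reads as nil, i.e. "changed"
        if bytes.Equal(current, rendered) {
            return nil // unchanged: leave the running service alone
        }
        if err := os.WriteFile(path, rendered, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated sample
        fmt.Println(installUnit(unit, "/lib/systemd/system/docker.service"))
    }
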
	I0719 16:29:16.520142    2830 start.go:300] post-start starting for "image-052000" (driver="qemu2")
	I0719 16:29:16.520146    2830 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 16:29:16.520220    2830 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 16:29:16.520228    2830 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/id_rsa Username:docker}
	I0719 16:29:16.549209    2830 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 16:29:16.550681    2830 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 16:29:16.550685    2830 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/addons for local assets ...
	I0719 16:29:16.550750    2830 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/files for local assets ...
	I0719 16:29:16.550868    2830 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/14702.pem -> 14702.pem in /etc/ssl/certs
	I0719 16:29:16.550981    2830 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 16:29:16.553397    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/14702.pem --> /etc/ssl/certs/14702.pem (1708 bytes)
	I0719 16:29:16.560799    2830 start.go:303] post-start completed in 40.654666ms
	I0719 16:29:16.561179    2830 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/config.json ...
	I0719 16:29:16.561330    2830 start.go:128] duration metric: createHost completed in 14.186148041s
	I0719 16:29:16.561361    2830 main.go:141] libmachine: Using SSH client type: native
	I0719 16:29:16.561576    2830 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10129d170] 0x10129fbd0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0719 16:29:16.561579    2830 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 16:29:16.617595    2830 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689809356.377777335
	
	I0719 16:29:16.617598    2830 fix.go:206] guest clock: 1689809356.377777335
	I0719 16:29:16.617602    2830 fix.go:219] Guest: 2023-07-19 16:29:16.377777335 -0700 PDT Remote: 2023-07-19 16:29:16.561338 -0700 PDT m=+14.282002584 (delta=-183.560665ms)
	I0719 16:29:16.617610    2830 fix.go:190] guest clock delta is within tolerance: -183.560665ms
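
The guest clock check above runs date +%s.%N over SSH (the %!s(MISSING) noise is the log formatter mangling the % verbs), parses the epoch seconds and nanoseconds, and compares against the host clock; at -183ms the drift is inside tolerance, so no resync happens. A sketch of the parse-and-compare step:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns `date +%s.%N` output such as
    // "1689809356.377777335" into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1689809356.377777335")
        if err != nil {
            panic(err)
        }
        // Only resync the guest when the drift leaves a small tolerance window.
        fmt.Printf("guest clock delta: %v\n", guest.Sub(time.Now()))
    }
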
	I0719 16:29:16.617612    2830 start.go:83] releasing machines lock for "image-052000", held for 14.242464042s
	I0719 16:29:16.617899    2830 ssh_runner.go:195] Run: cat /version.json
	I0719 16:29:16.617905    2830 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/id_rsa Username:docker}
	I0719 16:29:16.617917    2830 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 16:29:16.617934    2830 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/id_rsa Username:docker}
	I0719 16:29:16.686574    2830 ssh_runner.go:195] Run: systemctl --version
	I0719 16:29:16.688528    2830 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 16:29:16.690231    2830 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 16:29:16.690260    2830 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0719 16:29:16.695092    2830 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0719 16:29:16.695097    2830 start.go:466] detecting cgroup driver to use...
	I0719 16:29:16.695197    2830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 16:29:16.700680    2830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0719 16:29:16.703818    2830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 16:29:16.707305    2830 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 16:29:16.707343    2830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 16:29:16.710303    2830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 16:29:16.713409    2830 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 16:29:16.716876    2830 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 16:29:16.720650    2830 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 16:29:16.724413    2830 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 16:29:16.728018    2830 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 16:29:16.730846    2830 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 16:29:16.733527    2830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:29:16.809477    2830 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 16:29:16.818576    2830 start.go:466] detecting cgroup driver to use...
	I0719 16:29:16.818643    2830 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 16:29:16.824873    2830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 16:29:16.832519    2830 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 16:29:16.840895    2830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 16:29:16.846081    2830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 16:29:16.850544    2830 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 16:29:16.883728    2830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 16:29:16.888414    2830 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 16:29:16.893828    2830 ssh_runner.go:195] Run: which cri-dockerd
	I0719 16:29:16.895179    2830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 16:29:16.897644    2830 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 16:29:16.902827    2830 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 16:29:16.983199    2830 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 16:29:17.052895    2830 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 16:29:17.052921    2830 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0719 16:29:17.058387    2830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:29:17.133361    2830 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 16:29:18.301715    2830 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.168361875s)
	I0719 16:29:18.301774    2830 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 16:29:18.390656    2830 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0719 16:29:18.467012    2830 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0719 16:29:18.542815    2830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:29:18.617810    2830 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0719 16:29:18.625747    2830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:29:18.707039    2830 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0719 16:29:18.731736    2830 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0719 16:29:18.731817    2830 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0719 16:29:18.733813    2830 start.go:534] Will wait 60s for crictl version
	I0719 16:29:18.733855    2830 ssh_runner.go:195] Run: which crictl
	I0719 16:29:18.735228    2830 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0719 16:29:18.750042    2830 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.4
	RuntimeApiVersion:  v1alpha2
	I0719 16:29:18.750117    2830 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 16:29:18.761988    2830 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 16:29:18.777470    2830 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.4 ...
	I0719 16:29:18.777612    2830 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0719 16:29:18.779020    2830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
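
The one-liner above is how host.minikube.internal gets pinned: filter any existing line for the name out of /etc/hosts, append the fresh mapping, write to a temp file, and cp it back. An equivalent sketch in Go (simplified; like the shell version, it keys on a tab before the hostname):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry rewrites the hosts file so exactly one line maps name to ip.
    func setHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        fmt.Println(setHostsEntry("/etc/hosts", "192.168.105.1", "host.minikube.internal"))
    }
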
	I0719 16:29:18.782609    2830 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:29:18.782651    2830 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 16:29:18.788012    2830 docker.go:636] Got preloaded images: 
	I0719 16:29:18.788015    2830 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0719 16:29:18.788051    2830 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 16:29:18.790725    2830 ssh_runner.go:195] Run: which lz4
	I0719 16:29:18.792006    2830 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0719 16:29:18.793297    2830 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 16:29:18.793307    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0719 16:29:20.077067    2830 docker.go:600] Took 1.285130 seconds to copy over tarball
	I0719 16:29:20.077120    2830 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 16:29:21.092863    2830 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.0157495s)
	I0719 16:29:21.092874    2830 ssh_runner.go:146] rm: /preloaded.tar.lz4
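
The preload path above: stat shows /preloaded.tar.lz4 missing on the guest, so the ~343 MB cached tarball is scp'd over, unpacked into /var with tar -I lz4, then deleted. A sketch of the extract step, shown as local commands for brevity (minikube drives them through ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload unpacks the preloaded-images tarball into /var, then
    // removes it, as in the log.
    func extractPreload(tarball string) error {
        if err := exec.Command("stat", tarball).Run(); err != nil {
            return fmt.Errorf("tarball not on guest yet, copy it first: %w", err)
        }
        // -I lz4 makes tar stream the archive through the lz4 decompressor.
        if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
            return fmt.Errorf("extract: %v\n%s", err, out)
        }
        return exec.Command("sudo", "rm", tarball).Run()
    }

    func main() {
        fmt.Println(extractPreload("/preloaded.tar.lz4"))
    }
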
	I0719 16:29:21.108804    2830 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 16:29:21.112769    2830 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0719 16:29:21.118112    2830 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:29:21.201286    2830 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 16:29:22.670788    2830 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.469516209s)
	I0719 16:29:22.670865    2830 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 16:29:22.676879    2830 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0719 16:29:22.676887    2830 cache_images.go:84] Images are preloaded, skipping loading
	I0719 16:29:22.676948    2830 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 16:29:22.684863    2830 cni.go:84] Creating CNI manager for ""
	I0719 16:29:22.684869    2830 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:29:22.684882    2830 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0719 16:29:22.684889    2830 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-052000 NodeName:image-052000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0719 16:29:22.684978    2830 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-052000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 16:29:22.685016    2830 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-052000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:image-052000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0719 16:29:22.685085    2830 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0719 16:29:22.688623    2830 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 16:29:22.688648    2830 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 16:29:22.692116    2830 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0719 16:29:22.697362    2830 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0719 16:29:22.702554    2830 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0719 16:29:22.707533    2830 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0719 16:29:22.708810    2830 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 16:29:22.713087    2830 certs.go:56] Setting up /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000 for IP: 192.168.105.5
	I0719 16:29:22.713093    2830 certs.go:190] acquiring lock for shared ca certs: {Name:mk57268b94adc82cb06ba056d8f0acecf538b87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:29:22.713227    2830 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key
	I0719 16:29:22.713263    2830 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key
	I0719 16:29:22.713288    2830 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/client.key
	I0719 16:29:22.713294    2830 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/client.crt with IP's: []
	I0719 16:29:22.826141    2830 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/client.crt ...
	I0719 16:29:22.826145    2830 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/client.crt: {Name:mkc25406169c6e2dc4acc186d2bc1b67baa40006 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:29:22.826348    2830 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/client.key ...
	I0719 16:29:22.826349    2830 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/client.key: {Name:mk763fd363a5f0bcf44294c5395a511fd70a7362 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:29:22.826454    2830 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.key.e69b33ca
	I0719 16:29:22.826459    2830 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0719 16:29:22.909448    2830 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.crt.e69b33ca ...
	I0719 16:29:22.909450    2830 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.crt.e69b33ca: {Name:mk118ecd399629c3545cff04c3cf29847afe47a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:29:22.909570    2830 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.key.e69b33ca ...
	I0719 16:29:22.909572    2830 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.key.e69b33ca: {Name:mk3fac5adcf2fd7209c3473b5dfcdadaa9c85ae0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:29:22.909667    2830 certs.go:337] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.crt
	I0719 16:29:22.909857    2830 certs.go:341] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.key
	I0719 16:29:22.909984    2830 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/proxy-client.key
	I0719 16:29:22.909989    2830 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/proxy-client.crt with IP's: []
	I0719 16:29:23.246200    2830 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/proxy-client.crt ...
	I0719 16:29:23.246207    2830 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/proxy-client.crt: {Name:mk15f2c4e390b914681dcb17ea0881224d7cb61d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:29:23.246469    2830 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/proxy-client.key ...
	I0719 16:29:23.246471    2830 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/proxy-client.key: {Name:mk268a362349e5c2ac4be38086bb6f9ccfcab7d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
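
The certs.go steps above mint leaf certificates signed by the shared minikubeCA, e.g. the apiserver cert with the SAN list [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]. The core of that with crypto/x509, as a sketch under simplified assumptions (RSA keys, no PEM files or locking, throwaway CA built in main):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // signedCert generates a fresh RSA key and a certificate for it signed
    // by the given CA, with the IP SANs baked in.
    func signedCert(ca *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: cn},
            IPAddresses:  ips,
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        der, _, err := signedCert(ca, caKey, "minikube-apiserver",
            []net.IP{net.ParseIP("192.168.105.5"), net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1")})
        fmt.Println(len(der), err)
    }
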
	I0719 16:29:23.246727    2830 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/1470.pem (1338 bytes)
	W0719 16:29:23.246762    2830 certs.go:433] ignoring /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/1470_empty.pem, impossibly tiny 0 bytes
	I0719 16:29:23.246767    2830 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 16:29:23.246790    2830 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem (1082 bytes)
	I0719 16:29:23.246807    2830 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem (1123 bytes)
	I0719 16:29:23.246822    2830 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem (1675 bytes)
	I0719 16:29:23.246858    2830 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/14702.pem (1708 bytes)
	I0719 16:29:23.247137    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0719 16:29:23.255135    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 16:29:23.262338    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 16:29:23.269300    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/image-052000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0719 16:29:23.276726    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 16:29:23.284268    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 16:29:23.291897    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 16:29:23.299154    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 16:29:23.306119    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/1470.pem --> /usr/share/ca-certificates/1470.pem (1338 bytes)
	I0719 16:29:23.312944    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/14702.pem --> /usr/share/ca-certificates/14702.pem (1708 bytes)
	I0719 16:29:23.320151    2830 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 16:29:23.327019    2830 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 16:29:23.332075    2830 ssh_runner.go:195] Run: openssl version
	I0719 16:29:23.334074    2830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1470.pem && ln -fs /usr/share/ca-certificates/1470.pem /etc/ssl/certs/1470.pem"
	I0719 16:29:23.337377    2830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1470.pem
	I0719 16:29:23.338827    2830 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 19 23:25 /usr/share/ca-certificates/1470.pem
	I0719 16:29:23.338847    2830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1470.pem
	I0719 16:29:23.340675    2830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1470.pem /etc/ssl/certs/51391683.0"
	I0719 16:29:23.344134    2830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14702.pem && ln -fs /usr/share/ca-certificates/14702.pem /etc/ssl/certs/14702.pem"
	I0719 16:29:23.347178    2830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14702.pem
	I0719 16:29:23.348622    2830 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 19 23:25 /usr/share/ca-certificates/14702.pem
	I0719 16:29:23.348637    2830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14702.pem
	I0719 16:29:23.350516    2830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 16:29:23.353568    2830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 16:29:23.356851    2830 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 16:29:23.358210    2830 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 19 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0719 16:29:23.358229    2830 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 16:29:23.359969    2830 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
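
The ln -fs commands above install each CA into OpenSSL's hashed directory layout: a cert must be reachable as /etc/ssl/certs/<subject-hash>.0 (51391683.0, 3ec20f2e.0, b5213941.0 here) so verification can find it by hash instead of scanning every file. A sketch of that install step:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // installCA links a CA cert into /etc/ssl/certs under its OpenSSL
    // subject-hash name, e.g. b5213941.0 for minikubeCA.pem above.
    func installCA(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        os.Remove(link) // emulate ln -f
        return os.Symlink(pemPath, link)
    }

    func main() {
        fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
    }
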
	I0719 16:29:23.362956    2830 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0719 16:29:23.364301    2830 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0719 16:29:23.364329    2830 kubeadm.go:404] StartCluster: {Name:image-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-052000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:29:23.364390    2830 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 16:29:23.370640    2830 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 16:29:23.373537    2830 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 16:29:23.376767    2830 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 16:29:23.379888    2830 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 16:29:23.379899    2830 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0719 16:29:23.403293    2830 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0719 16:29:23.403316    2830 kubeadm.go:322] [preflight] Running pre-flight checks
	I0719 16:29:23.456998    2830 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 16:29:23.457054    2830 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 16:29:23.457123    2830 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0719 16:29:23.516147    2830 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 16:29:23.520425    2830 out.go:204]   - Generating certificates and keys ...
	I0719 16:29:23.520480    2830 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0719 16:29:23.520509    2830 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0719 16:29:23.654472    2830 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 16:29:23.786317    2830 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0719 16:29:23.935217    2830 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0719 16:29:24.048869    2830 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0719 16:29:24.125006    2830 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0719 16:29:24.125071    2830 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-052000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0719 16:29:24.218191    2830 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0719 16:29:24.218244    2830 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-052000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0719 16:29:24.297193    2830 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 16:29:24.374066    2830 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 16:29:24.479184    2830 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0719 16:29:24.479220    2830 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 16:29:24.556010    2830 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 16:29:24.682179    2830 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 16:29:24.721477    2830 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 16:29:24.825896    2830 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 16:29:24.832685    2830 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 16:29:24.833035    2830 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 16:29:24.833063    2830 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0719 16:29:24.916111    2830 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 16:29:24.924311    2830 out.go:204]   - Booting up control plane ...
	I0719 16:29:24.924368    2830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 16:29:24.924410    2830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 16:29:24.924464    2830 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 16:29:24.924505    2830 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 16:29:24.924590    2830 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 16:29:28.924039    2830 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.004069 seconds
	I0719 16:29:28.924098    2830 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 16:29:28.928585    2830 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 16:29:29.441788    2830 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 16:29:29.441959    2830 kubeadm.go:322] [mark-control-plane] Marking the node image-052000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0719 16:29:29.948883    2830 kubeadm.go:322] [bootstrap-token] Using token: cegkel.3kyye3ptn56jdkfy
	I0719 16:29:29.953193    2830 out.go:204]   - Configuring RBAC rules ...
	I0719 16:29:29.953277    2830 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 16:29:29.954495    2830 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 16:29:29.958635    2830 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 16:29:29.960248    2830 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0719 16:29:29.961675    2830 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 16:29:29.963035    2830 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 16:29:29.967587    2830 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 16:29:30.119660    2830 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0719 16:29:30.358379    2830 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0719 16:29:30.359326    2830 kubeadm.go:322] 
	I0719 16:29:30.359363    2830 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0719 16:29:30.359365    2830 kubeadm.go:322] 
	I0719 16:29:30.359418    2830 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0719 16:29:30.359422    2830 kubeadm.go:322] 
	I0719 16:29:30.359436    2830 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0719 16:29:30.359474    2830 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 16:29:30.359516    2830 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 16:29:30.359520    2830 kubeadm.go:322] 
	I0719 16:29:30.359549    2830 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0719 16:29:30.359551    2830 kubeadm.go:322] 
	I0719 16:29:30.359586    2830 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0719 16:29:30.359588    2830 kubeadm.go:322] 
	I0719 16:29:30.359615    2830 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0719 16:29:30.359679    2830 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 16:29:30.359715    2830 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 16:29:30.359717    2830 kubeadm.go:322] 
	I0719 16:29:30.359775    2830 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 16:29:30.359822    2830 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0719 16:29:30.359823    2830 kubeadm.go:322] 
	I0719 16:29:30.359890    2830 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token cegkel.3kyye3ptn56jdkfy \
	I0719 16:29:30.359960    2830 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 \
	I0719 16:29:30.359985    2830 kubeadm.go:322] 	--control-plane 
	I0719 16:29:30.359989    2830 kubeadm.go:322] 
	I0719 16:29:30.360039    2830 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0719 16:29:30.360044    2830 kubeadm.go:322] 
	I0719 16:29:30.360099    2830 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token cegkel.3kyye3ptn56jdkfy \
	I0719 16:29:30.360159    2830 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 
	I0719 16:29:30.360291    2830 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
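The join commands above embed a bootstrap token and the SHA-256 hash of the cluster CA's public key. A sketch of how to re-derive both on the control-plane node, assuming the certificateDir /var/lib/minikube/certs shown earlier (stock kubeadm clusters keep these under /etc/kubernetes/pki instead):

    # List active bootstrap tokens (cegkel.3kyye3ptn56jdkfy above):
    sudo /var/lib/minikube/binaries/v1.27.3/kubeadm token list
    # Recompute the --discovery-token-ca-cert-hash value printed above:
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'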
	I0719 16:29:30.360298    2830 cni.go:84] Creating CNI manager for ""
	I0719 16:29:30.360305    2830 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:29:30.368232    2830 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0719 16:29:30.372443    2830 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0719 16:29:30.375435    2830 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
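The 457-byte conflist itself is not echoed into the log. A minimal sketch of what a bridge CNI configuration of this kind typically contains (the subnet and plugin options here are assumptions, not values taken from this run):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }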
	I0719 16:29:30.380256    2830 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 16:29:30.380328    2830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4 minikube.k8s.io/name=image-052000 minikube.k8s.io/updated_at=2023_07_19T16_29_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:29:30.380330    2830 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:29:30.441998    2830 ops.go:34] apiserver oom_adj: -16
	I0719 16:29:30.442007    2830 kubeadm.go:1081] duration metric: took 61.701542ms to wait for elevateKubeSystemPrivileges.
	I0719 16:29:30.442014    2830 kubeadm.go:406] StartCluster complete in 7.077812166s
	I0719 16:29:30.442022    2830 settings.go:142] acquiring lock: {Name:mk58631521ffd49c3231a31589bcae3549c3b53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:29:30.442103    2830 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:29:30.442414    2830 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/kubeconfig: {Name:mk508b0ad49e7803e8fd5dcb96b45d1248a097b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:29:30.442601    2830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 16:29:30.442616    2830 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0719 16:29:30.442653    2830 addons.go:69] Setting default-storageclass=true in profile "image-052000"
	I0719 16:29:30.442655    2830 addons.go:69] Setting storage-provisioner=true in profile "image-052000"
	I0719 16:29:30.442658    2830 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-052000"
	I0719 16:29:30.442660    2830 addons.go:231] Setting addon storage-provisioner=true in "image-052000"
	I0719 16:29:30.442682    2830 host.go:66] Checking if "image-052000" exists ...
	I0719 16:29:30.442700    2830 config.go:182] Loaded profile config "image-052000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:29:30.448311    2830 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 16:29:30.452187    2830 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 16:29:30.452191    2830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 16:29:30.452199    2830 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/id_rsa Username:docker}
	I0719 16:29:30.456648    2830 addons.go:231] Setting addon default-storageclass=true in "image-052000"
	I0719 16:29:30.456662    2830 host.go:66] Checking if "image-052000" exists ...
	I0719 16:29:30.457298    2830 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 16:29:30.457302    2830 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 16:29:30.457306    2830 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/image-052000/id_rsa Username:docker}
	I0719 16:29:30.484283    2830 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0719 16:29:30.491051    2830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 16:29:30.549860    2830 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 16:29:30.927861    2830 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
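The sed pipeline run a few lines earlier splices two directives into the CoreDNS Corefile: a log directive ahead of errors, and a hosts block ahead of the forward directive so that host.minikube.internal resolves to the host. The edited ConfigMap would read roughly as follows (surrounding directives elided):

    .:53 {
        log
        errors
        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
    }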
	I0719 16:29:30.963362    2830 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-052000" context rescaled to 1 replicas
	I0719 16:29:30.963375    2830 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:29:30.973826    2830 out.go:177] * Verifying Kubernetes components...
	I0719 16:29:30.977842    2830 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 16:29:31.018742    2830 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0719 16:29:31.015309    2830 api_server.go:52] waiting for apiserver process to appear ...
	I0719 16:29:31.028757    2830 addons.go:502] enable addons completed in 586.14575ms: enabled=[storage-provisioner default-storageclass]
	I0719 16:29:31.028790    2830 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 16:29:31.032931    2830 api_server.go:72] duration metric: took 69.546875ms to wait for apiserver process to appear ...
	I0719 16:29:31.032933    2830 api_server.go:88] waiting for apiserver healthz status ...
	I0719 16:29:31.032943    2830 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0719 16:29:31.036218    2830 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0719 16:29:31.036786    2830 api_server.go:141] control plane version: v1.27.3
	I0719 16:29:31.036789    2830 api_server.go:131] duration metric: took 3.854333ms to wait for apiserver health ...
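The same readiness probe can be reproduced by hand; a sketch from the host, where -k skips verification of the cluster's self-signed serving certificate:

    curl -k https://192.168.105.5:8443/healthz
    # ok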
	I0719 16:29:31.036791    2830 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 16:29:31.039346    2830 system_pods.go:59] 5 kube-system pods found
	I0719 16:29:31.039350    2830 system_pods.go:61] "etcd-image-052000" [532b6c9a-c51d-473e-a846-95e08919e3d0] Pending
	I0719 16:29:31.039352    2830 system_pods.go:61] "kube-apiserver-image-052000" [7d978990-c3d6-4c1a-85f0-41988331dfab] Pending
	I0719 16:29:31.039354    2830 system_pods.go:61] "kube-controller-manager-image-052000" [8165cd0f-4b74-46f3-9743-aee91e9a2257] Pending
	I0719 16:29:31.039356    2830 system_pods.go:61] "kube-scheduler-image-052000" [837e4a51-db20-4f94-94e9-d5d9bbee9ed1] Pending
	I0719 16:29:31.039358    2830 system_pods.go:61] "storage-provisioner" [d0ce85de-b140-4c93-9423-a7eca05ef60f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I0719 16:29:31.039361    2830 system_pods.go:74] duration metric: took 2.567208ms to wait for pod list to return data ...
	I0719 16:29:31.039363    2830 kubeadm.go:581] duration metric: took 75.980459ms to wait for : map[apiserver:true system_pods:true] ...
	I0719 16:29:31.039368    2830 node_conditions.go:102] verifying NodePressure condition ...
	I0719 16:29:31.040557    2830 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0719 16:29:31.040564    2830 node_conditions.go:123] node cpu capacity is 2
	I0719 16:29:31.040569    2830 node_conditions.go:105] duration metric: took 1.199125ms to run NodePressure ...
	I0719 16:29:31.040573    2830 start.go:228] waiting for startup goroutines ...
	I0719 16:29:31.040575    2830 start.go:233] waiting for cluster config update ...
	I0719 16:29:31.040579    2830 start.go:242] writing updated cluster config ...
	I0719 16:29:31.040849    2830 ssh_runner.go:195] Run: rm -f paused
	I0719 16:29:31.069431    2830 start.go:578] kubectl: 1.27.2, cluster: 1.27.3 (minor skew: 0)
	I0719 16:29:31.073854    2830 out.go:177] * Done! kubectl is now configured to use "image-052000" cluster and "default" namespace by default
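With the kubeconfig updated, a quick smoke test against the new cluster would be (hypothetical follow-up commands, not part of this run):

    kubectl --context image-052000 get nodes
    kubectl --context image-052000 -n kube-system get pods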
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-07-19 23:29:13 UTC, ends at Wed 2023-07-19 23:29:33 UTC. --
	Jul 19 23:29:25 image-052000 cri-dockerd[996]: time="2023-07-19T23:29:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/19479e96b2aeec8c1dc889369a09902d1b786a4adb8b202e93a61835db28d369/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.636756631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.636785839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.636791964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.636796256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:29:25 image-052000 cri-dockerd[996]: time="2023-07-19T23:29:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/baaa3687ddc6cafb8381514977c5c6c6d69b06edef1a5e78529dc0e944ca9bc8/resolv.conf as [nameserver 192.168.105.1]"
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.666761631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.666799797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.666810839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.666819464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.677996339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.678105797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.678147381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:29:25 image-052000 dockerd[1102]: time="2023-07-19T23:29:25.678184839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:29:32 image-052000 dockerd[1096]: time="2023-07-19T23:29:32.885692468Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 19 23:29:32 image-052000 dockerd[1096]: time="2023-07-19T23:29:32.997621218Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 19 23:29:33 image-052000 dockerd[1096]: time="2023-07-19T23:29:33.011137759Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jul 19 23:29:33 image-052000 dockerd[1102]: time="2023-07-19T23:29:33.039269009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:29:33 image-052000 dockerd[1102]: time="2023-07-19T23:29:33.039299968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:29:33 image-052000 dockerd[1102]: time="2023-07-19T23:29:33.039308426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:29:33 image-052000 dockerd[1102]: time="2023-07-19T23:29:33.039314634Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:29:33 image-052000 dockerd[1096]: time="2023-07-19T23:29:33.173624426Z" level=info msg="ignoring event" container=0d31a268685747836744bf8892119448100cd210a73da84d85ad89c1ebf17047 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:29:33 image-052000 dockerd[1102]: time="2023-07-19T23:29:33.173907009Z" level=info msg="shim disconnected" id=0d31a268685747836744bf8892119448100cd210a73da84d85ad89c1ebf17047 namespace=moby
	Jul 19 23:29:33 image-052000 dockerd[1102]: time="2023-07-19T23:29:33.173960093Z" level=warning msg="cleaning up after shim disconnected" id=0d31a268685747836744bf8892119448100cd210a73da84d85ad89c1ebf17047 namespace=moby
	Jul 19 23:29:33 image-052000 dockerd[1102]: time="2023-07-19T23:29:33.173977426Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	46704fb13cfe7       bcb9e554eaab6       8 seconds ago       Running             kube-scheduler            0                   baaa3687ddc6c
	00d2c4826476b       ab3683b584ae5       8 seconds ago       Running             kube-controller-manager   0                   19479e96b2aee
	da483e2d21526       39dfb036b0986       8 seconds ago       Running             kube-apiserver            0                   20cc84457739c
	80d0986e74c45       24bc64e911039       8 seconds ago       Running             etcd                      0                   c1f51d9547dc3
	
	* 
	* ==> describe nodes <==
	* Name:               image-052000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-052000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4
	                    minikube.k8s.io/name=image-052000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_19T16_29_30_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Jul 2023 23:29:27 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-052000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Jul 2023 23:29:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Jul 2023 23:29:33 +0000   Wed, 19 Jul 2023 23:29:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Jul 2023 23:29:33 +0000   Wed, 19 Jul 2023 23:29:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Jul 2023 23:29:33 +0000   Wed, 19 Jul 2023 23:29:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Jul 2023 23:29:33 +0000   Wed, 19 Jul 2023 23:29:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-052000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 b621ca76b404438faa9063f0604b5c6f
	  System UUID:                b621ca76b404438faa9063f0604b5c6f
	  Boot ID:                    3904dd0b-e8c8-4235-8599-d725b9123c60
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-052000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-052000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-052000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-052000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 9s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  9s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8s (x8 over 9s)  kubelet  Node image-052000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 9s)  kubelet  Node image-052000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 9s)  kubelet  Node image-052000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s               kubelet  Node image-052000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s               kubelet  Node image-052000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s               kubelet  Node image-052000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                0s               kubelet  Node image-052000 status is now: NodeReady
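Note the node.kubernetes.io/not-ready:NoSchedule taint listed above: it is what left storage-provisioner Unschedulable in the earlier pod list, and the node lifecycle controller removes it once the kubelet reports Ready (the final NodeReady event). A sketch of how to confirm it has cleared:

    kubectl --context image-052000 describe node image-052000 | grep '^Taints:'
    # Taints:             <none>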
	
	* 
	* ==> dmesg <==
	* [Jul19 23:29] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.653419] EINJ: EINJ table not found.
	[  +0.516708] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.042931] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000854] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.011776] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.074017] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +0.417194] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.172717] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +0.072081] systemd-fstab-generator[716]: Ignoring "noauto" for root device
	[  +0.080703] systemd-fstab-generator[729]: Ignoring "noauto" for root device
	[  +1.150690] kauditd_printk_skb: 17 callbacks suppressed
	[  +0.105720] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[  +0.076740] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[  +0.075656] systemd-fstab-generator[938]: Ignoring "noauto" for root device
	[  +0.076089] systemd-fstab-generator[949]: Ignoring "noauto" for root device
	[  +0.087745] systemd-fstab-generator[989]: Ignoring "noauto" for root device
	[  +2.493970] systemd-fstab-generator[1089]: Ignoring "noauto" for root device
	[  +3.710483] systemd-fstab-generator[1420]: Ignoring "noauto" for root device
	[  +0.253001] kauditd_printk_skb: 68 callbacks suppressed
	[  +4.874467] systemd-fstab-generator[2268]: Ignoring "noauto" for root device
	[  +3.262681] kauditd_printk_skb: 2 callbacks suppressed
	
	* 
	* ==> etcd [80d0986e74c4] <==
	* {"level":"info","ts":"2023-07-19T23:29:25.845Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-07-19T23:29:25.845Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-07-19T23:29:25.845Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-07-19T23:29:25.846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-07-19T23:29:25.846Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-07-19T23:29:25.846Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-07-19T23:29:25.846Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-07-19T23:29:26.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-07-19T23:29:26.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-07-19T23:29:26.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-07-19T23:29:26.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-07-19T23:29:26.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-07-19T23:29:26.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-07-19T23:29:26.838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-07-19T23:29:26.839Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-052000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-07-19T23:29:26.839Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T23:29:26.839Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T23:29:26.840Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T23:29:26.840Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T23:29:26.840Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-07-19T23:29:26.840Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-07-19T23:29:26.840Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-07-19T23:29:26.840Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-07-19T23:29:26.840Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-07-19T23:29:26.841Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  23:29:33 up 0 min,  0 users,  load average: 0.06, 0.02, 0.00
	Linux image-052000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [da483e2d2152] <==
	* I0719 23:29:27.572242       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 23:29:27.572286       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0719 23:29:27.575935       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[global-default system node-high leader-election workload-high catch-all workload-low] items=[{target:24 lowerBound:24 upperBound:649} {target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625} {target:49 lowerBound:49 upperBound:698} {target:NaN lowerBound:13 upperBound:613} {target:24 lowerBound:24 upperBound:845}]
	E0719 23:29:27.579168       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[global-default catch-all system node-high leader-election workload-high workload-low] items=[{target:24 lowerBound:24 upperBound:649} {target:NaN lowerBound:13 upperBound:613} {target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845}]
	E0719 23:29:27.583187       1 apf_controller.go:411] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=[catch-all system node-high leader-election workload-high workload-low global-default] items=[{target:NaN lowerBound:13 upperBound:613} {target:50 lowerBound:50 upperBound:674} {target:73 lowerBound:73 upperBound:698} {target:25 lowerBound:25 upperBound:625} {target:49 lowerBound:49 upperBound:698} {target:24 lowerBound:24 upperBound:845} {target:24 lowerBound:24 upperBound:649}]
	I0719 23:29:27.609870       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0719 23:29:27.609949       1 aggregator.go:152] initial CRD sync complete...
	I0719 23:29:27.609995       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 23:29:27.610036       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 23:29:27.610068       1 cache.go:39] Caches are synced for autoregister controller
	I0719 23:29:27.621749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 23:29:28.322800       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0719 23:29:28.455473       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0719 23:29:28.458515       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0719 23:29:28.458525       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 23:29:28.621352       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 23:29:28.632755       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 23:29:28.724836       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0719 23:29:28.727996       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0719 23:29:28.728540       1 controller.go:624] quota admission added evaluator for: endpoints
	I0719 23:29:28.731126       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 23:29:29.521890       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0719 23:29:29.874658       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0719 23:29:29.879072       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0719 23:29:29.883126       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [00d2c4826476] <==
	* I0719 23:29:30.024686       1 controllermanager.go:638] "Started controller" controller="csrsigning"
	I0719 23:29:30.024711       1 certificate_controller.go:112] Starting certificate controller "csrsigning-legacy-unknown"
	I0719 23:29:30.024714       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0719 23:29:30.024722       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0719 23:29:30.171113       1 controllermanager.go:638] "Started controller" controller="endpointslice"
	I0719 23:29:30.171190       1 endpointslice_controller.go:252] Starting endpoint slice controller
	I0719 23:29:30.171198       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
	I0719 23:29:30.323684       1 controllermanager.go:638] "Started controller" controller="podgc"
	I0719 23:29:30.323732       1 gc_controller.go:103] Starting GC controller
	I0719 23:29:30.323739       1 shared_informer.go:311] Waiting for caches to sync for GC
	I0719 23:29:30.573768       1 controllermanager.go:638] "Started controller" controller="namespace"
	I0719 23:29:30.573796       1 namespace_controller.go:197] "Starting namespace controller"
	I0719 23:29:30.573800       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0719 23:29:30.721593       1 controllermanager.go:638] "Started controller" controller="serviceaccount"
	I0719 23:29:30.721666       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0719 23:29:30.721672       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0719 23:29:30.870601       1 controllermanager.go:638] "Started controller" controller="root-ca-cert-publisher"
	I0719 23:29:30.870633       1 publisher.go:101] Starting root CA certificate configmap publisher
	I0719 23:29:30.870638       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0719 23:29:31.021429       1 controllermanager.go:638] "Started controller" controller="endpoint"
	I0719 23:29:31.021466       1 endpoints_controller.go:172] Starting endpoint controller
	I0719 23:29:31.021470       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	I0719 23:29:31.171413       1 controllermanager.go:638] "Started controller" controller="ttl"
	I0719 23:29:31.171451       1 ttl_controller.go:124] "Starting TTL controller"
	I0719 23:29:31.171457       1 shared_informer.go:311] Waiting for caches to sync for TTL
	
	* 
	* ==> kube-scheduler [46704fb13cfe] <==
	* W0719 23:29:27.538466       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 23:29:27.538469       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0719 23:29:27.538481       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 23:29:27.538485       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0719 23:29:27.538507       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 23:29:27.538514       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0719 23:29:27.538526       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 23:29:27.538530       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0719 23:29:27.538541       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 23:29:27.538545       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0719 23:29:27.538556       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0719 23:29:27.538559       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0719 23:29:27.538582       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 23:29:27.538586       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0719 23:29:27.538597       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 23:29:27.538601       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0719 23:29:27.538615       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 23:29:27.538621       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0719 23:29:27.538633       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 23:29:27.538637       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0719 23:29:27.538651       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0719 23:29:27.538657       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0719 23:29:28.521712       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 23:29:28.521731       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0719 23:29:28.935842       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-07-19 23:29:13 UTC, ends at Wed 2023-07-19 23:29:33 UTC. --
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.030918    2274 topology_manager.go:212] "Topology Admit Handler"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.030934    2274 topology_manager.go:212] "Topology Admit Handler"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.036122    2274 kubelet_node_status.go:108] "Node was previously registered" node="image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.036221    2274 kubelet_node_status.go:73] "Successfully registered node" node="image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.128422    2274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fbeeb4bf19cb349660e7b137fe895439-flexvolume-dir\") pod \"kube-controller-manager-image-052000\" (UID: \"fbeeb4bf19cb349660e7b137fe895439\") " pod="kube-system/kube-controller-manager-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.128446    2274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fbeeb4bf19cb349660e7b137fe895439-usr-share-ca-certificates\") pod \"kube-controller-manager-image-052000\" (UID: \"fbeeb4bf19cb349660e7b137fe895439\") " pod="kube-system/kube-controller-manager-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.128458    2274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0f2c53b2c3ded7faaae03dfe57802773-ca-certs\") pod \"kube-apiserver-image-052000\" (UID: \"0f2c53b2c3ded7faaae03dfe57802773\") " pod="kube-system/kube-apiserver-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.128468    2274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0f2c53b2c3ded7faaae03dfe57802773-k8s-certs\") pod \"kube-apiserver-image-052000\" (UID: \"0f2c53b2c3ded7faaae03dfe57802773\") " pod="kube-system/kube-apiserver-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.128480    2274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0f2c53b2c3ded7faaae03dfe57802773-usr-share-ca-certificates\") pod \"kube-apiserver-image-052000\" (UID: \"0f2c53b2c3ded7faaae03dfe57802773\") " pod="kube-system/kube-apiserver-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.128489    2274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fbeeb4bf19cb349660e7b137fe895439-ca-certs\") pod \"kube-controller-manager-image-052000\" (UID: \"fbeeb4bf19cb349660e7b137fe895439\") " pod="kube-system/kube-controller-manager-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.128499    2274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fbeeb4bf19cb349660e7b137fe895439-k8s-certs\") pod \"kube-controller-manager-image-052000\" (UID: \"fbeeb4bf19cb349660e7b137fe895439\") " pod="kube-system/kube-controller-manager-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.128509    2274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fbeeb4bf19cb349660e7b137fe895439-kubeconfig\") pod \"kube-controller-manager-image-052000\" (UID: \"fbeeb4bf19cb349660e7b137fe895439\") " pod="kube-system/kube-controller-manager-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.128521    2274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/320e482438c081d98a1b86ac8f43024a-kubeconfig\") pod \"kube-scheduler-image-052000\" (UID: \"320e482438c081d98a1b86ac8f43024a\") " pod="kube-system/kube-scheduler-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.128531    2274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/9fa7b308d359da6f53242cfbe6231f4c-etcd-certs\") pod \"etcd-image-052000\" (UID: \"9fa7b308d359da6f53242cfbe6231f4c\") " pod="kube-system/etcd-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.128541    2274 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/9fa7b308d359da6f53242cfbe6231f4c-etcd-data\") pod \"etcd-image-052000\" (UID: \"9fa7b308d359da6f53242cfbe6231f4c\") " pod="kube-system/etcd-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.911156    2274 apiserver.go:52] "Watching apiserver"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.927093    2274 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.932357    2274 reconciler.go:41] "Reconciler: start to sync state"
	Jul 19 23:29:30 image-052000 kubelet[2274]: E0719 23:29:30.984072    2274 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-052000\" already exists" pod="kube-system/kube-apiserver-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: E0719 23:29:30.984199    2274 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-image-052000\" already exists" pod="kube-system/kube-controller-manager-image-052000"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.989494    2274 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-052000" podStartSLOduration=0.989470383 podCreationTimestamp="2023-07-19 23:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-19 23:29:30.984124883 +0000 UTC m=+1.122058710" watchObservedRunningTime="2023-07-19 23:29:30.989470383 +0000 UTC m=+1.127404210"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.995246    2274 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-052000" podStartSLOduration=0.995225883 podCreationTimestamp="2023-07-19 23:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-19 23:29:30.989635633 +0000 UTC m=+1.127569460" watchObservedRunningTime="2023-07-19 23:29:30.995225883 +0000 UTC m=+1.133159710"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.999302    2274 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-052000" podStartSLOduration=0.999290008 podCreationTimestamp="2023-07-19 23:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-19 23:29:30.999222717 +0000 UTC m=+1.137156543" watchObservedRunningTime="2023-07-19 23:29:30.999290008 +0000 UTC m=+1.137223835"
	Jul 19 23:29:30 image-052000 kubelet[2274]: I0719 23:29:30.999382    2274 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-052000" podStartSLOduration=0.999374842 podCreationTimestamp="2023-07-19 23:29:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-07-19 23:29:30.995367758 +0000 UTC m=+1.133301585" watchObservedRunningTime="2023-07-19 23:29:30.999374842 +0000 UTC m=+1.137308627"
	Jul 19 23:29:33 image-052000 kubelet[2274]: I0719 23:29:33.584237    2274 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-052000 -n image-052000
helpers_test.go:261: (dbg) Run:  kubectl --context image-052000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-052000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-052000 describe pod storage-provisioner: exit status 1 (37.975167ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-052000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.04s)
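For reference, the failing step can be re-run by hand. The command below is reconstructed from the audit log further down in this report; it assumes the image-052000 profile still exists, and --build-opt=build-arg is how the test threads ENV_A into the Dockerfile under testdata/image-build/test-arg:

	# Sketch: manually re-run the build-arg step (profile name and paths
	# are taken from the audit log below; assumes the cluster is still up).
	out/minikube-darwin-arm64 image build -t aaa:latest \
	  --build-opt=build-arg=ENV_A=test_env_str \
	  --build-opt=no-cache \
	  ./testdata/image-build/test-arg -p image-052000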

TestIngressAddonLegacy/serial/ValidateIngressAddons (56.28s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-442000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0719 16:31:14.067262    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-442000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (18.201711167s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-442000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-442000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a2fc9b69-f446-406a-9206-9777a6bb4619] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a2fc9b69-f446-406a-9206-9777a6bb4619] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.015442875s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-442000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-442000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-442000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.033533584s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
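The timeout above means no DNS answer ever came back from the VM at 192.168.105.6. A manual check that separates "VM unreachable" from "ingress-dns responder not answering" might look like the sketch below; the IP comes from the `ip` step above, and the dig flags only cap the wait so a dead responder fails fast:

	# Sketch: is the VM reachable at all, and does anything answer DNS there?
	ping -c 3 192.168.105.6
	dig @192.168.105.6 hello-john.test +time=2 +tries=1
	# If both fail, inspect the addon pods rather than the host network:
	kubectl --context ingress-addon-legacy-442000 get pods -A | grep -i dns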
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-442000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-442000 addons disable ingress-dns --alsologtostderr -v=1: (5.641372541s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-442000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-442000 addons disable ingress --alsologtostderr -v=1: (7.115820834s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-442000 -n ingress-addon-legacy-442000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-442000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| dashboard      | --url --port 36195                       | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | -p functional-001000                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| image          | functional-001000                        | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-001000                        | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-001000 ssh pgrep              | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-001000 image build -t         | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | localhost/my-image:functional-001000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-001000 image ls               | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	| image          | functional-001000                        | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-001000                        | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| update-context | functional-001000                        | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-001000                        | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-001000                        | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:28 PDT | 19 Jul 23 16:28 PDT |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| delete         | -p functional-001000                     | functional-001000           | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:29 PDT |
	| start          | -p image-052000 --driver=qemu2           | image-052000                | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:29 PDT |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-052000                | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:29 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-052000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-052000                | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:29 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-052000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-052000                | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:29 PDT |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-052000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-052000                | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:29 PDT |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-052000                          |                             |         |         |                     |                     |
	| delete         | -p image-052000                          | image-052000                | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:29 PDT |
	| start          | -p ingress-addon-legacy-442000           | ingress-addon-legacy-442000 | jenkins | v1.31.0 | 19 Jul 23 16:29 PDT | 19 Jul 23 16:30 PDT |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-442000              | ingress-addon-legacy-442000 | jenkins | v1.31.0 | 19 Jul 23 16:30 PDT | 19 Jul 23 16:31 PDT |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-442000              | ingress-addon-legacy-442000 | jenkins | v1.31.0 | 19 Jul 23 16:31 PDT | 19 Jul 23 16:31 PDT |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-442000              | ingress-addon-legacy-442000 | jenkins | v1.31.0 | 19 Jul 23 16:31 PDT | 19 Jul 23 16:31 PDT |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-442000 ip           | ingress-addon-legacy-442000 | jenkins | v1.31.0 | 19 Jul 23 16:31 PDT | 19 Jul 23 16:31 PDT |
	| addons         | ingress-addon-legacy-442000              | ingress-addon-legacy-442000 | jenkins | v1.31.0 | 19 Jul 23 16:31 PDT | 19 Jul 23 16:31 PDT |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-442000              | ingress-addon-legacy-442000 | jenkins | v1.31.0 | 19 Jul 23 16:31 PDT | 19 Jul 23 16:31 PDT |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/19 16:29:34
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 16:29:34.583833    2863 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:29:34.583949    2863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:29:34.583952    2863 out.go:309] Setting ErrFile to fd 2...
	I0719 16:29:34.583955    2863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:29:34.584060    2863 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:29:34.585118    2863 out.go:303] Setting JSON to false
	I0719 16:29:34.600141    2863 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3545,"bootTime":1689805829,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:29:34.600214    2863 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:29:34.603754    2863 out.go:177] * [ingress-addon-legacy-442000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:29:34.606968    2863 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:29:34.607067    2863 notify.go:220] Checking for updates...
	I0719 16:29:34.610871    2863 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:29:34.614835    2863 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:29:34.617888    2863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:29:34.620974    2863 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:29:34.623939    2863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:29:34.627123    2863 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:29:34.630901    2863 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:29:34.637901    2863 start.go:298] selected driver: qemu2
	I0719 16:29:34.637906    2863 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:29:34.637914    2863 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:29:34.639767    2863 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:29:34.642975    2863 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:29:34.645973    2863 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:29:34.645994    2863 cni.go:84] Creating CNI manager for ""
	I0719 16:29:34.646004    2863 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 16:29:34.646009    2863 start_flags.go:319] config:
	{Name:ingress-addon-legacy-442000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-442000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:29:34.650189    2863 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:29:34.656939    2863 out.go:177] * Starting control plane node ingress-addon-legacy-442000 in cluster ingress-addon-legacy-442000
	I0719 16:29:34.660864    2863 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0719 16:29:34.860881    2863 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0719 16:29:34.860959    2863 cache.go:57] Caching tarball of preloaded images
	I0719 16:29:34.861725    2863 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0719 16:29:34.866794    2863 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0719 16:29:34.874657    2863 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0719 16:29:35.096921    2863 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0719 16:29:45.262147    2863 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0719 16:29:45.262294    2863 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0719 16:29:46.011387    2863 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
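The checksum being verified here is the md5 advertised in the download URL above. A hand check of the cached tarball (macOS md5, cache path from this log) would be:

	# Sketch: verify the cached preload against the advertised md5.
	md5 -q /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	# expected: c8c260b886393123ce9d312d8ac2379e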
	I0719 16:29:46.011578    2863 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/config.json ...
	I0719 16:29:46.011599    2863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/config.json: {Name:mkafda09acdb971d898e7926fd227d3a75ee7645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:29:46.011838    2863 start.go:365] acquiring machines lock for ingress-addon-legacy-442000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:29:46.011872    2863 start.go:369] acquired machines lock for "ingress-addon-legacy-442000" in 24µs
	I0719 16:29:46.011889    2863 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-442000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:29:46.011920    2863 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:29:46.016959    2863 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0719 16:29:46.031560    2863 start.go:159] libmachine.API.Create for "ingress-addon-legacy-442000" (driver="qemu2")
	I0719 16:29:46.031586    2863 client.go:168] LocalClient.Create starting
	I0719 16:29:46.031665    2863 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:29:46.031687    2863 main.go:141] libmachine: Decoding PEM data...
	I0719 16:29:46.031697    2863 main.go:141] libmachine: Parsing certificate...
	I0719 16:29:46.031745    2863 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:29:46.031760    2863 main.go:141] libmachine: Decoding PEM data...
	I0719 16:29:46.031768    2863 main.go:141] libmachine: Parsing certificate...
	I0719 16:29:46.032104    2863 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:29:46.142872    2863 main.go:141] libmachine: Creating SSH key...
	I0719 16:29:46.258908    2863 main.go:141] libmachine: Creating Disk image...
	I0719 16:29:46.258914    2863 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:29:46.259049    2863 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/disk.qcow2
	I0719 16:29:46.267515    2863 main.go:141] libmachine: STDOUT: 
	I0719 16:29:46.267530    2863 main.go:141] libmachine: STDERR: 
	I0719 16:29:46.267584    2863 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/disk.qcow2 +20000M
	I0719 16:29:46.274720    2863 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:29:46.274736    2863 main.go:141] libmachine: STDERR: 
	I0719 16:29:46.274749    2863 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/disk.qcow2
	I0719 16:29:46.274756    2863 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:29:46.274796    2863 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:e7:de:16:45:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/disk.qcow2
	I0719 16:29:46.308821    2863 main.go:141] libmachine: STDOUT: 
	I0719 16:29:46.308845    2863 main.go:141] libmachine: STDERR: 
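The QEMU invocation above is dense; restated below with the long machine paths elided and editorial comments added. The flags and values are copied from the log, nothing here is additional configuration:

	# -M virt -cpu host -accel hvf : arm64 "virt" machine on Hypervisor.framework
	# -m 4096 -smp 2               : matches the Memory=4096/CPUs=2 start flags
	# -boot d -cdrom ...iso        : boot the minikube ISO on first start
	# -netdev socket,id=net0,fd=3  : NIC uses the pre-opened socket_vmnet fd
	#                                handed down by socket_vmnet_client
	# -daemonize ...disk.qcow2     : background the VM with the qcow2 disk
	qemu-system-aarch64 -M virt -cpu host -accel hvf -m 4096 -smp 2 \
	  -boot d -cdrom .../boot2docker.iso \
	  -device virtio-net-pci,netdev=net0,mac=aa:e7:de:16:45:92 \
	  -netdev socket,id=net0,fd=3 \
	  -daemonize .../disk.qcow2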
	I0719 16:29:46.308850    2863 main.go:141] libmachine: Attempt 0
	I0719 16:29:46.308861    2863 main.go:141] libmachine: Searching for aa:e7:de:16:45:92 in /var/db/dhcpd_leases ...
	I0719 16:29:46.308933    2863 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0719 16:29:46.308954    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:de:1f:e5:ae:4e ID:1,6a:de:1f:e5:ae:4e Lease:0x64b9c349}
	I0719 16:29:46.308962    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:46.308968    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:46.308972    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:48.311080    2863 main.go:141] libmachine: Attempt 1
	I0719 16:29:48.311191    2863 main.go:141] libmachine: Searching for aa:e7:de:16:45:92 in /var/db/dhcpd_leases ...
	I0719 16:29:48.311462    2863 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0719 16:29:48.311511    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:de:1f:e5:ae:4e ID:1,6a:de:1f:e5:ae:4e Lease:0x64b9c349}
	I0719 16:29:48.311565    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:48.311599    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:48.311632    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:50.313728    2863 main.go:141] libmachine: Attempt 2
	I0719 16:29:50.313759    2863 main.go:141] libmachine: Searching for aa:e7:de:16:45:92 in /var/db/dhcpd_leases ...
	I0719 16:29:50.313922    2863 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0719 16:29:50.313935    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:de:1f:e5:ae:4e ID:1,6a:de:1f:e5:ae:4e Lease:0x64b9c349}
	I0719 16:29:50.313941    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:50.313953    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:50.313958    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:52.315949    2863 main.go:141] libmachine: Attempt 3
	I0719 16:29:52.315957    2863 main.go:141] libmachine: Searching for aa:e7:de:16:45:92 in /var/db/dhcpd_leases ...
	I0719 16:29:52.315988    2863 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0719 16:29:52.315996    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:de:1f:e5:ae:4e ID:1,6a:de:1f:e5:ae:4e Lease:0x64b9c349}
	I0719 16:29:52.316001    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:52.316006    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:52.316014    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:54.317997    2863 main.go:141] libmachine: Attempt 4
	I0719 16:29:54.318006    2863 main.go:141] libmachine: Searching for aa:e7:de:16:45:92 in /var/db/dhcpd_leases ...
	I0719 16:29:54.318074    2863 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0719 16:29:54.318081    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:de:1f:e5:ae:4e ID:1,6a:de:1f:e5:ae:4e Lease:0x64b9c349}
	I0719 16:29:54.318087    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:54.318093    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:54.318098    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:56.320132    2863 main.go:141] libmachine: Attempt 5
	I0719 16:29:56.320168    2863 main.go:141] libmachine: Searching for aa:e7:de:16:45:92 in /var/db/dhcpd_leases ...
	I0719 16:29:56.320246    2863 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0719 16:29:56.320256    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:6a:de:1f:e5:ae:4e ID:1,6a:de:1f:e5:ae:4e Lease:0x64b9c349}
	I0719 16:29:56.320270    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:12:80:ec:16:f5:bf ID:1,12:80:ec:16:f5:bf Lease:0x64b9c27d}
	I0719 16:29:56.320275    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:fa:a0:fd:4f:11:83 ID:1,fa:a0:fd:4f:11:83 Lease:0x64b870f1}
	I0719 16:29:56.320280    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:36:3a:2:96:5:da ID:1,36:3a:2:96:5:da Lease:0x64b870c5}
	I0719 16:29:58.322318    2863 main.go:141] libmachine: Attempt 6
	I0719 16:29:58.322370    2863 main.go:141] libmachine: Searching for aa:e7:de:16:45:92 in /var/db/dhcpd_leases ...
	I0719 16:29:58.322522    2863 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0719 16:29:58.322541    2863 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:aa:e7:de:16:45:92 ID:1,aa:e7:de:16:45:92 Lease:0x64b9c375}
	I0719 16:29:58.322548    2863 main.go:141] libmachine: Found match: aa:e7:de:16:45:92
	I0719 16:29:58.322560    2863 main.go:141] libmachine: IP: 192.168.105.6
	I0719 16:29:58.322568    2863 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I0719 16:29:59.329532    2863 machine.go:88] provisioning docker machine ...
	I0719 16:29:59.329556    2863 buildroot.go:166] provisioning hostname "ingress-addon-legacy-442000"
	I0719 16:29:59.329608    2863 main.go:141] libmachine: Using SSH client type: native
	I0719 16:29:59.329876    2863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10508d170] 0x10508fbd0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0719 16:29:59.329885    2863 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-442000 && echo "ingress-addon-legacy-442000" | sudo tee /etc/hostname
	I0719 16:29:59.405084    2863 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-442000
	
	I0719 16:29:59.405149    2863 main.go:141] libmachine: Using SSH client type: native
	I0719 16:29:59.405394    2863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10508d170] 0x10508fbd0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0719 16:29:59.405403    2863 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-442000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-442000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-442000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0719 16:29:59.478990    2863 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0719 16:29:59.479005    2863 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15585-1056/.minikube CaCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15585-1056/.minikube}
	I0719 16:29:59.479013    2863 buildroot.go:174] setting up certificates
	I0719 16:29:59.479020    2863 provision.go:83] configureAuth start
	I0719 16:29:59.479027    2863 provision.go:138] copyHostCerts
	I0719 16:29:59.479058    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem
	I0719 16:29:59.479116    2863 exec_runner.go:144] found /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem, removing ...
	I0719 16:29:59.479122    2863 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem
	I0719 16:29:59.479262    2863 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.pem (1082 bytes)
	I0719 16:29:59.479440    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem
	I0719 16:29:59.479467    2863 exec_runner.go:144] found /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem, removing ...
	I0719 16:29:59.479470    2863 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem
	I0719 16:29:59.479542    2863 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/cert.pem (1123 bytes)
	I0719 16:29:59.479621    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem
	I0719 16:29:59.479650    2863 exec_runner.go:144] found /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem, removing ...
	I0719 16:29:59.479654    2863 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem
	I0719 16:29:59.479696    2863 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15585-1056/.minikube/key.pem (1675 bytes)
	I0719 16:29:59.479800    2863 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-442000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-442000]
	I0719 16:29:59.701537    2863 provision.go:172] copyRemoteCerts
	I0719 16:29:59.701605    2863 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0719 16:29:59.701625    2863 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/id_rsa Username:docker}
	I0719 16:29:59.739856    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0719 16:29:59.739904    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0719 16:29:59.747340    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0719 16:29:59.747373    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0719 16:29:59.754968    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0719 16:29:59.755027    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0719 16:29:59.762341    2863 provision.go:86] duration metric: configureAuth took 283.313167ms
	I0719 16:29:59.762348    2863 buildroot.go:189] setting minikube options for container-runtime
	I0719 16:29:59.762468    2863 config.go:182] Loaded profile config "ingress-addon-legacy-442000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0719 16:29:59.762504    2863 main.go:141] libmachine: Using SSH client type: native
	I0719 16:29:59.762732    2863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10508d170] 0x10508fbd0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0719 16:29:59.762737    2863 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0719 16:29:59.833786    2863 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0719 16:29:59.833793    2863 buildroot.go:70] root file system type: tmpfs
	I0719 16:29:59.833854    2863 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0719 16:29:59.833903    2863 main.go:141] libmachine: Using SSH client type: native
	I0719 16:29:59.834141    2863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10508d170] 0x10508fbd0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0719 16:29:59.834181    2863 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0719 16:29:59.909403    2863 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0719 16:29:59.909453    2863 main.go:141] libmachine: Using SSH client type: native
	I0719 16:29:59.909695    2863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10508d170] 0x10508fbd0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0719 16:29:59.909708    2863 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0719 16:30:00.249577    2863 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
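	The empty ExecStart= line in the unit written above is the systemd idiom for clearing an ExecStart inherited from a base unit; without it, two ExecStart= settings would make this Type=notify service fail to start, as the unit's own comments note. The same clear-and-replace pattern as a conventional drop-in file is sketched below; the override path follows systemd convention and is not taken from this log:

	# Sketch: clear-and-replace ExecStart via a drop-in override.
	sudo mkdir -p /etc/systemd/system/docker.service.d
	printf '%s\n' '[Service]' 'ExecStart=' \
	  'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' |
	  sudo tee /etc/systemd/system/docker.service.d/override.conf
	sudo systemctl daemon-reload && sudo systemctl restart docker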
	
	I0719 16:30:00.249593    2863 machine.go:91] provisioned docker machine in 920.067125ms
	I0719 16:30:00.249597    2863 client.go:171] LocalClient.Create took 14.218257583s
	I0719 16:30:00.249619    2863 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-442000" took 14.218315375s
	I0719 16:30:00.249624    2863 start.go:300] post-start starting for "ingress-addon-legacy-442000" (driver="qemu2")
	I0719 16:30:00.249629    2863 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0719 16:30:00.249702    2863 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0719 16:30:00.249716    2863 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/id_rsa Username:docker}
	I0719 16:30:00.287313    2863 ssh_runner.go:195] Run: cat /etc/os-release
	I0719 16:30:00.288639    2863 info.go:137] Remote host: Buildroot 2021.02.12
	I0719 16:30:00.288646    2863 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/addons for local assets ...
	I0719 16:30:00.288697    2863 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15585-1056/.minikube/files for local assets ...
	I0719 16:30:00.288798    2863 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/14702.pem -> 14702.pem in /etc/ssl/certs
	I0719 16:30:00.288803    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/14702.pem -> /etc/ssl/certs/14702.pem
	I0719 16:30:00.288906    2863 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0719 16:30:00.291507    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/14702.pem --> /etc/ssl/certs/14702.pem (1708 bytes)
	I0719 16:30:00.298755    2863 start.go:303] post-start completed in 49.126959ms
	I0719 16:30:00.299118    2863 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/config.json ...
	I0719 16:30:00.299278    2863 start.go:128] duration metric: createHost completed in 14.287610125s
	I0719 16:30:00.299307    2863 main.go:141] libmachine: Using SSH client type: native
	I0719 16:30:00.299524    2863 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10508d170] 0x10508fbd0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0719 16:30:00.299528    2863 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0719 16:30:00.372258    2863 main.go:141] libmachine: SSH cmd err, output: <nil>: 1689809400.673456293
	
	I0719 16:30:00.372266    2863 fix.go:206] guest clock: 1689809400.673456293
	I0719 16:30:00.372271    2863 fix.go:219] Guest: 2023-07-19 16:30:00.673456293 -0700 PDT Remote: 2023-07-19 16:30:00.299283 -0700 PDT m=+25.734306126 (delta=374.173293ms)
	I0719 16:30:00.372281    2863 fix.go:190] guest clock delta is within tolerance: 374.173293ms
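	The tolerance check above compares the guest wall clock against the host at a single instant. The same measurement by hand, using the binary and profile from this log (a sketch; `minikube ssh --` runs the trailing command inside the VM):

	# Sketch: one-shot guest-vs-host clock skew for this profile.
	host_epoch=$(date +%s)
	guest_epoch=$(out/minikube-darwin-arm64 -p ingress-addon-legacy-442000 ssh -- date +%s)
	echo "guest-host skew: $((guest_epoch - host_epoch))s"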
	I0719 16:30:00.372284    2863 start.go:83] releasing machines lock for "ingress-addon-legacy-442000", held for 14.360664459s
	I0719 16:30:00.372648    2863 ssh_runner.go:195] Run: cat /version.json
	I0719 16:30:00.372658    2863 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/id_rsa Username:docker}
	I0719 16:30:00.372707    2863 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0719 16:30:00.372730    2863 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/id_rsa Username:docker}
	I0719 16:30:00.455810    2863 ssh_runner.go:195] Run: systemctl --version
	I0719 16:30:00.457994    2863 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0719 16:30:00.459878    2863 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0719 16:30:00.459908    2863 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0719 16:30:00.463206    2863 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0719 16:30:00.468335    2863 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
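The two find/sed runs above rewrite any existing bridge or podman CNI config in /etc/cni/net.d so its subnet and gateway match the pod CIDR chosen for the cluster. A rough Go equivalent of that rewrite applied to a single conflist fragment (the sample JSON is illustrative, not the actual file contents):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A podman bridge conflist fragment before the rewrite (illustrative).
	conf := `{"subnet": "10.88.0.0/16", "gateway": "10.88.0.1"}`
	// Mirror the sed edits from the log: force the pod CIDR and gateway.
	subnet := regexp.MustCompile(`"subnet": "[^"]*"`)
	gateway := regexp.MustCompile(`"gateway": "[^"]*"`)
	out := subnet.ReplaceAllString(conf, `"subnet": "10.244.0.0/16"`)
	out = gateway.ReplaceAllString(out, `"gateway": "10.244.0.1"`)
	fmt.Println(out) // {"subnet": "10.244.0.0/16", "gateway": "10.244.0.1"}
}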
	I0719 16:30:00.468342    2863 start.go:466] detecting cgroup driver to use...
	I0719 16:30:00.468409    2863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 16:30:00.475315    2863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0719 16:30:00.479132    2863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0719 16:30:00.482518    2863 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0719 16:30:00.482557    2863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0719 16:30:00.485482    2863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 16:30:00.488354    2863 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0719 16:30:00.491619    2863 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0719 16:30:00.495062    2863 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0719 16:30:00.498413    2863 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0719 16:30:00.501405    2863 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0719 16:30:00.504198    2863 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0719 16:30:00.507431    2863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:30:00.586585    2863 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0719 16:30:00.594776    2863 start.go:466] detecting cgroup driver to use...
	I0719 16:30:00.594839    2863 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0719 16:30:00.601512    2863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 16:30:00.607116    2863 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0719 16:30:00.614752    2863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0719 16:30:00.619637    2863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 16:30:00.624604    2863 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0719 16:30:00.687375    2863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0719 16:30:00.693332    2863 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0719 16:30:00.699273    2863 ssh_runner.go:195] Run: which cri-dockerd
	I0719 16:30:00.700785    2863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0719 16:30:00.703511    2863 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0719 16:30:00.708429    2863 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0719 16:30:00.785795    2863 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0719 16:30:00.864852    2863 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0719 16:30:00.864868    2863 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0719 16:30:00.869883    2863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:30:00.954570    2863 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 16:30:02.118781    2863 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164213166s)
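The 144-byte daemon.json pushed above is what switches Docker to the cgroupfs driver. The log doesn't show its contents; a minimal sketch of producing such a file follows, using dockerd's documented `exec-opts` key. This is an assumption about the shape of the file — minikube's real template may carry additional settings:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical minimal daemon.json forcing the cgroupfs driver.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}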
	I0719 16:30:02.118869    2863 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 16:30:02.138476    2863 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0719 16:30:02.153418    2863 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.4 ...
	I0719 16:30:02.153517    2863 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0719 16:30:02.154963    2863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0719 16:30:02.158878    2863 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0719 16:30:02.158928    2863 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 16:30:02.168234    2863 docker.go:636] Got preloaded images: 
	I0719 16:30:02.168245    2863 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0719 16:30:02.168280    2863 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 16:30:02.171658    2863 ssh_runner.go:195] Run: which lz4
	I0719 16:30:02.173184    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0719 16:30:02.173274    2863 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0719 16:30:02.174505    2863 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0719 16:30:02.174520    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0719 16:30:03.830755    2863 docker.go:600] Took 1.657556 seconds to copy over tarball
	I0719 16:30:03.830814    2863 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0719 16:30:05.153813    2863 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.3230075s)
	I0719 16:30:05.153827    2863 ssh_runner.go:146] rm: /preloaded.tar.lz4
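The preload path is: scp the lz4 image tarball into the guest, unpack it under /var so Docker's image store is pre-populated, then delete the tarball. A sketch replaying the two guest-side steps with os/exec (in minikube these run over SSH via ssh_runner; the paths below are taken from the log):

package main

import (
	"log"
	"os/exec"
)

// Replays the guest-side preload steps from the log as local commands.
func main() {
	steps := [][]string{
		{"sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"},
		{"sudo", "rm", "-f", "/preloaded.tar.lz4"},
	}
	for _, step := range steps {
		if out, err := exec.Command(step[0], step[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", step, err, out)
		}
	}
}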
	I0719 16:30:05.176610    2863 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0719 16:30:05.180440    2863 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0719 16:30:05.188193    2863 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0719 16:30:05.265682    2863 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0719 16:30:06.765579    2863 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.499907625s)
	I0719 16:30:06.765667    2863 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0719 16:30:06.771679    2863 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0719 16:30:06.771687    2863 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0719 16:30:06.771691    2863 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0719 16:30:06.813475    2863 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0719 16:30:06.814573    2863 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0719 16:30:06.816187    2863 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0719 16:30:06.816303    2863 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0719 16:30:06.816350    2863 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0719 16:30:06.816404    2863 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0719 16:30:06.816443    2863 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0719 16:30:06.816482    2863 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 16:30:06.819674    2863 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0719 16:30:06.819774    2863 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0719 16:30:06.819814    2863 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0719 16:30:06.822351    2863 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 16:30:06.822435    2863 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0719 16:30:06.822447    2863 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0719 16:30:06.822470    2863 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0719 16:30:06.822466    2863 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	W0719 16:30:08.009214    2863 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0719 16:30:08.009320    2863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0719 16:30:08.021568    2863 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0719 16:30:08.021591    2863 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0719 16:30:08.021631    2863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0719 16:30:08.027560    2863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0719 16:30:08.064117    2863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0719 16:30:08.070386    2863 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0719 16:30:08.070407    2863 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0719 16:30:08.070445    2863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0719 16:30:08.076695    2863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0719 16:30:08.111278    2863 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0719 16:30:08.111389    2863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0719 16:30:08.117533    2863 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0719 16:30:08.117555    2863 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0719 16:30:08.117601    2863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0719 16:30:08.123496    2863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	W0719 16:30:08.271403    2863 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0719 16:30:08.271534    2863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0719 16:30:08.277854    2863 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0719 16:30:08.277879    2863 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0719 16:30:08.277923    2863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0719 16:30:08.283986    2863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	W0719 16:30:08.459529    2863 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0719 16:30:08.459669    2863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0719 16:30:08.465499    2863 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0719 16:30:08.465521    2863 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0719 16:30:08.465559    2863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0719 16:30:08.471559    2863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	W0719 16:30:08.587554    2863 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0719 16:30:08.587720    2863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 16:30:08.597490    2863 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0719 16:30:08.597521    2863 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 16:30:08.597584    2863 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 16:30:08.610529    2863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0719 16:30:08.657826    2863 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0719 16:30:08.657962    2863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0719 16:30:08.666845    2863 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0719 16:30:08.666873    2863 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0719 16:30:08.666924    2863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0719 16:30:08.685530    2863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0719 16:30:08.861636    2863 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0719 16:30:08.862184    2863 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0719 16:30:08.885356    2863 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0719 16:30:08.885402    2863 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0719 16:30:08.885526    2863 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0719 16:30:08.899923    2863 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0719 16:30:08.900000    2863 cache_images.go:92] LoadImages completed in 2.128339583s
	W0719 16:30:08.900075    2863 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
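Each cached image goes through the same decision visible above: inspect the image ID in the runtime, and if it doesn't match the expected hash (here, because the preloaded amd64 images don't satisfy the wanted arm64 digests), remove it and schedule a transfer from the local cache. A simplified sketch of that check, shelling out to docker as the log does; the retries and actual cache lookup are omitted:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether `docker image inspect` fails to show
// the expected image ID — the "needs transfer" condition in the log.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // image not present at all
	}
	return !strings.Contains(strings.TrimSpace(string(out)), wantID)
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
		"2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c"))
}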
	I0719 16:30:08.900184    2863 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0719 16:30:08.917047    2863 cni.go:84] Creating CNI manager for ""
	I0719 16:30:08.917062    2863 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 16:30:08.917081    2863 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0719 16:30:08.917098    2863 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-442000 NodeName:ingress-addon-legacy-442000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0719 16:30:08.917259    2863 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-442000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0719 16:30:08.917345    2863 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-442000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-442000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0719 16:30:08.917426    2863 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0719 16:30:08.922996    2863 binaries.go:44] Found k8s binaries, skipping transfer
	I0719 16:30:08.923035    2863 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0719 16:30:08.927674    2863 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0719 16:30:08.934943    2863 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0719 16:30:08.941443    2863 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0719 16:30:08.947437    2863 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0719 16:30:08.948853    2863 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
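The two hosts-file updates (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent pattern: strip any existing line ending in the name, then append a fresh entry. A small Go rendition of that bash one-liner:

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any line for the given name, then appends a fresh
// "ip<TAB>name" entry — the same effect as the grep -v / echo pipeline.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + "\n" + ip + "\t" + name + "\n"
}

func main() {
	fmt.Print(upsertHost("127.0.0.1\tlocalhost", "192.168.105.6",
		"control-plane.minikube.internal"))
}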
	I0719 16:30:08.952663    2863 certs.go:56] Setting up /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000 for IP: 192.168.105.6
	I0719 16:30:08.952673    2863 certs.go:190] acquiring lock for shared ca certs: {Name:mk57268b94adc82cb06ba056d8f0acecf538b87f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:30:08.952813    2863 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key
	I0719 16:30:08.952859    2863 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key
	I0719 16:30:08.952887    2863 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.key
	I0719 16:30:08.952894    2863 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt with IP's: []
	I0719 16:30:09.025782    2863 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt ...
	I0719 16:30:09.025788    2863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: {Name:mk260742a2c3acd2bf4f095ec506c8f28cec4fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:30:09.026020    2863 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.key ...
	I0719 16:30:09.026024    2863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.key: {Name:mk9a447cf7db7ab41f8543f0f03af27ed94532fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:30:09.026142    2863 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.key.b354f644
	I0719 16:30:09.026148    2863 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0719 16:30:09.093390    2863 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.crt.b354f644 ...
	I0719 16:30:09.093396    2863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.crt.b354f644: {Name:mk3a1cf4e61ff41d574c8e6fe0cfbf84fc1f7480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:30:09.093523    2863 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.key.b354f644 ...
	I0719 16:30:09.093526    2863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.key.b354f644: {Name:mk8f9c450b2aec6587e795e11ecebdc3f2ec49f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:30:09.093643    2863 certs.go:337] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.crt
	I0719 16:30:09.093748    2863 certs.go:341] copying /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.key
	I0719 16:30:09.093837    2863 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/proxy-client.key
	I0719 16:30:09.093843    2863 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/proxy-client.crt with IP's: []
	I0719 16:30:09.147806    2863 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/proxy-client.crt ...
	I0719 16:30:09.147809    2863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/proxy-client.crt: {Name:mkcd4561cb6efa81f84c51d15ec2d7de055aec8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:30:09.147927    2863 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/proxy-client.key ...
	I0719 16:30:09.147929    2863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/proxy-client.key: {Name:mk43c158feebbe3230c4992c15f29a8fd71d8f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
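All four profile certificates above come from the same flow: generate a key pair, build a certificate template (with SANs for the apiserver cert), and sign it with the shared minikube CA. A compact stand-in using crypto/x509; the real crypto.go also handles the SAN lists, PEM encoding, and the file locks shown in the log, and the names below are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// A throwaway CA standing in for the shared minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// A client certificate signed by that CA, as for client.crt above.
	cliKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	cliTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, cliKey)
	fmt.Println(len(der) > 0, err)
}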
	I0719 16:30:09.148023    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0719 16:30:09.148037    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0719 16:30:09.148051    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0719 16:30:09.148063    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0719 16:30:09.148074    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0719 16:30:09.148088    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0719 16:30:09.148102    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0719 16:30:09.148118    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0719 16:30:09.148195    2863 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/1470.pem (1338 bytes)
	W0719 16:30:09.148220    2863 certs.go:433] ignoring /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/1470_empty.pem, impossibly tiny 0 bytes
	I0719 16:30:09.148227    2863 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca-key.pem (1675 bytes)
	I0719 16:30:09.148247    2863 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem (1082 bytes)
	I0719 16:30:09.148267    2863 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem (1123 bytes)
	I0719 16:30:09.148288    2863 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/certs/key.pem (1675 bytes)
	I0719 16:30:09.148333    2863 certs.go:437] found cert: /Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/14702.pem (1708 bytes)
	I0719 16:30:09.148353    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0719 16:30:09.148364    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/1470.pem -> /usr/share/ca-certificates/1470.pem
	I0719 16:30:09.148377    2863 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/14702.pem -> /usr/share/ca-certificates/14702.pem
	I0719 16:30:09.148706    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0719 16:30:09.155869    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0719 16:30:09.163091    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0719 16:30:09.169987    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0719 16:30:09.176829    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0719 16:30:09.183645    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0719 16:30:09.190745    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0719 16:30:09.197619    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0719 16:30:09.204074    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0719 16:30:09.211373    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/1470.pem --> /usr/share/ca-certificates/1470.pem (1338 bytes)
	I0719 16:30:09.218636    2863 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/ssl/certs/14702.pem --> /usr/share/ca-certificates/14702.pem (1708 bytes)
	I0719 16:30:09.225265    2863 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0719 16:30:09.230104    2863 ssh_runner.go:195] Run: openssl version
	I0719 16:30:09.231970    2863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1470.pem && ln -fs /usr/share/ca-certificates/1470.pem /etc/ssl/certs/1470.pem"
	I0719 16:30:09.235310    2863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1470.pem
	I0719 16:30:09.236943    2863 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jul 19 23:25 /usr/share/ca-certificates/1470.pem
	I0719 16:30:09.236960    2863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1470.pem
	I0719 16:30:09.238693    2863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1470.pem /etc/ssl/certs/51391683.0"
	I0719 16:30:09.241648    2863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14702.pem && ln -fs /usr/share/ca-certificates/14702.pem /etc/ssl/certs/14702.pem"
	I0719 16:30:09.244511    2863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14702.pem
	I0719 16:30:09.246024    2863 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jul 19 23:25 /usr/share/ca-certificates/14702.pem
	I0719 16:30:09.246044    2863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14702.pem
	I0719 16:30:09.247732    2863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14702.pem /etc/ssl/certs/3ec20f2e.0"
	I0719 16:30:09.251030    2863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0719 16:30:09.253994    2863 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0719 16:30:09.255441    2863 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jul 19 22:52 /usr/share/ca-certificates/minikubeCA.pem
	I0719 16:30:09.255463    2863 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0719 16:30:09.257308    2863 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
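Each CA installed under /usr/share/ca-certificates also gets a symlink named after its OpenSSL subject hash (b5213941.0 here), which is how OpenSSL locates trust anchors in /etc/ssl/certs. A sketch of that pairing, shelling out to openssl for the hash exactly as the log does; the cert path is illustrative and creating the symlink requires root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Compute the OpenSSL subject hash for a PEM certificate.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", "ca.pem").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out))
	// Link /etc/ssl/certs/<hash>.0 at the certificate, as the log's
	// `ln -fs` does (needs root to write under /etc/ssl/certs).
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	fmt.Println(os.Symlink("ca.pem", link))
}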
	I0719 16:30:09.260202    2863 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0719 16:30:09.261449    2863 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0719 16:30:09.261476    2863 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-442000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-442000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:30:09.261544    2863 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0719 16:30:09.266965    2863 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0719 16:30:09.269959    2863 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0719 16:30:09.272572    2863 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0719 16:30:09.275609    2863 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0719 16:30:09.275622    2863 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0719 16:30:09.301386    2863 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0719 16:30:09.301500    2863 kubeadm.go:322] [preflight] Running pre-flight checks
	I0719 16:30:09.387797    2863 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0719 16:30:09.387851    2863 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0719 16:30:09.387909    2863 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0719 16:30:09.435905    2863 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0719 16:30:09.437131    2863 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0719 16:30:09.437157    2863 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0719 16:30:09.513372    2863 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0719 16:30:09.521540    2863 out.go:204]   - Generating certificates and keys ...
	I0719 16:30:09.521573    2863 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0719 16:30:09.521643    2863 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0719 16:30:09.597520    2863 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0719 16:30:09.802197    2863 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0719 16:30:09.921423    2863 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0719 16:30:09.960686    2863 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0719 16:30:09.993309    2863 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0719 16:30:09.993386    2863 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-442000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0719 16:30:10.096074    2863 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0719 16:30:10.096143    2863 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-442000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0719 16:30:10.242324    2863 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0719 16:30:10.546767    2863 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0719 16:30:10.618682    2863 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0719 16:30:10.618738    2863 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0719 16:30:10.729762    2863 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0719 16:30:10.861946    2863 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0719 16:30:10.989892    2863 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0719 16:30:11.109396    2863 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0719 16:30:11.109856    2863 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0719 16:30:11.116153    2863 out.go:204]   - Booting up control plane ...
	I0719 16:30:11.116246    2863 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0719 16:30:11.116316    2863 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0719 16:30:11.116353    2863 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0719 16:30:11.116390    2863 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0719 16:30:11.116469    2863 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0719 16:30:22.622948    2863 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.506053 seconds
	I0719 16:30:22.623236    2863 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0719 16:30:22.646823    2863 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0719 16:30:23.162416    2863 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0719 16:30:23.162516    2863 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-442000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0719 16:30:23.669097    2863 kubeadm.go:322] [bootstrap-token] Using token: 5wua1y.2p5sbtz3aaqfglwi
	I0719 16:30:23.674971    2863 out.go:204]   - Configuring RBAC rules ...
	I0719 16:30:23.675047    2863 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0719 16:30:23.675115    2863 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0719 16:30:23.681190    2863 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0719 16:30:23.682195    2863 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0719 16:30:23.683285    2863 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0719 16:30:23.684476    2863 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0719 16:30:23.688436    2863 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0719 16:30:23.873489    2863 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0719 16:30:24.079467    2863 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0719 16:30:24.080039    2863 kubeadm.go:322] 
	I0719 16:30:24.080081    2863 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0719 16:30:24.080086    2863 kubeadm.go:322] 
	I0719 16:30:24.080147    2863 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0719 16:30:24.080155    2863 kubeadm.go:322] 
	I0719 16:30:24.080170    2863 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0719 16:30:24.080227    2863 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0719 16:30:24.080262    2863 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0719 16:30:24.080267    2863 kubeadm.go:322] 
	I0719 16:30:24.080311    2863 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0719 16:30:24.080387    2863 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0719 16:30:24.080447    2863 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0719 16:30:24.080451    2863 kubeadm.go:322] 
	I0719 16:30:24.080511    2863 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0719 16:30:24.080566    2863 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0719 16:30:24.080573    2863 kubeadm.go:322] 
	I0719 16:30:24.080643    2863 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5wua1y.2p5sbtz3aaqfglwi \
	I0719 16:30:24.080734    2863 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 \
	I0719 16:30:24.080752    2863 kubeadm.go:322]     --control-plane 
	I0719 16:30:24.080756    2863 kubeadm.go:322] 
	I0719 16:30:24.080822    2863 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0719 16:30:24.080826    2863 kubeadm.go:322] 
	I0719 16:30:24.080881    2863 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5wua1y.2p5sbtz3aaqfglwi \
	I0719 16:30:24.080955    2863 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:ac94190d213987345f5634b8d3500a45c06dcf10dacd514b79727b35e18f9f90 
	I0719 16:30:24.081128    2863 kubeadm.go:322] W0719 23:30:09.602684    1410 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0719 16:30:24.081275    2863 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0719 16:30:24.081382    2863 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.4. Latest validated version: 19.03
	I0719 16:30:24.081464    2863 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0719 16:30:24.081564    2863 kubeadm.go:322] W0719 23:30:11.414693    1410 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0719 16:30:24.081657    2863 kubeadm.go:322] W0719 23:30:11.415550    1410 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0719 16:30:24.081666    2863 cni.go:84] Creating CNI manager for ""
	I0719 16:30:24.081675    2863 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 16:30:24.081687    2863 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0719 16:30:24.081772    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:24.081773    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.0 minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4 minikube.k8s.io/name=ingress-addon-legacy-442000 minikube.k8s.io/updated_at=2023_07_19T16_30_24_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:24.086986    2863 ops.go:34] apiserver oom_adj: -16
	I0719 16:30:24.156968    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:24.692817    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:25.192907    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:25.692843    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:26.192835    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:26.692817    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:27.192790    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:27.692749    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:28.192747    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:28.692856    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:29.192482    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:29.692707    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:30.192775    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:30.692553    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:31.192490    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:31.692757    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:32.192487    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:32.692713    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:33.192618    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:33.692705    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:34.192670    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:34.692600    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:35.192620    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:35.692651    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:36.192569    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:36.692630    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:37.192547    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:37.692605    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:38.192287    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:38.692366    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:39.191282    2863 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0719 16:30:39.225663    2863 kubeadm.go:1081] duration metric: took 15.144232334s to wait for elevateKubeSystemPrivileges.
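	The repeated `kubectl get sa default` runs above are minikube's 500ms polling loop waiting for the "default" ServiceAccount to exist, i.e. for kube-controller-manager to finish bootstrapping service accounts, before it elevates kube-system privileges. A rough manual equivalent against this profile (a sketch; assumes the profile is still running):

	  minikube -p ingress-addon-legacy-442000 ssh -- sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default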
	I0719 16:30:39.225676    2863 kubeadm.go:406] StartCluster complete in 29.964735333s
	I0719 16:30:39.225685    2863 settings.go:142] acquiring lock: {Name:mk58631521ffd49c3231a31589bcae3549c3b53c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:30:39.225765    2863 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:30:39.226130    2863 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/kubeconfig: {Name:mk508b0ad49e7803e8fd5dcb96b45d1248a097b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:30:39.226319    2863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0719 16:30:39.226396    2863 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0719 16:30:39.226443    2863 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-442000"
	I0719 16:30:39.226451    2863 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-442000"
	I0719 16:30:39.226463    2863 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-442000"
	I0719 16:30:39.226476    2863 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-442000"
	I0719 16:30:39.226479    2863 host.go:66] Checking if "ingress-addon-legacy-442000" exists ...
	I0719 16:30:39.226567    2863 kapi.go:59] client config for ingress-addon-legacy-442000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.key", CAFile:"/Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060ea010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 16:30:39.226602    2863 config.go:182] Loaded profile config "ingress-addon-legacy-442000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0719 16:30:39.226940    2863 cert_rotation.go:137] Starting client certificate rotation controller
	I0719 16:30:39.227649    2863 kapi.go:59] client config for ingress-addon-legacy-442000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.key", CAFile:"/Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060ea010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 16:30:39.232500    2863 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 16:30:39.236495    2863 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 16:30:39.236501    2863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0719 16:30:39.236509    2863 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/id_rsa Username:docker}
	I0719 16:30:39.240295    2863 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-442000"
	I0719 16:30:39.240312    2863 host.go:66] Checking if "ingress-addon-legacy-442000" exists ...
	I0719 16:30:39.240979    2863 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0719 16:30:39.240986    2863 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0719 16:30:39.240992    2863 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/ingress-addon-legacy-442000/id_rsa Username:docker}
	I0719 16:30:39.275073    2863 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
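	The bash pipeline above rewrites the CoreDNS ConfigMap in place: sed injects a hosts block ahead of the forward plugin, so host.minikube.internal resolves to the gateway 192.168.105.1 inside the cluster, plus a log directive ahead of errors, and the result is fed back through kubectl replace. The patched Corefile fragment should look roughly like this (a sketch; the rest of the server block follows the stock kubeadm template and is not shown in the log):

	  .:53 {
	      log
	      errors
	      ...
	      hosts {
	         192.168.105.1 host.minikube.internal
	         fallthrough
	      }
	      forward . /etc/resolv.conf
	      ...
	  }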
	I0719 16:30:39.319065    2863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0719 16:30:39.352730    2863 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0719 16:30:39.561143    2863 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0719 16:30:39.630769    2863 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0719 16:30:39.640717    2863 addons.go:502] enable addons completed in 414.350708ms: enabled=[default-storageclass storage-provisioner]
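	The addon flow above is an scp of the rendered manifests into /etc/kubernetes/addons followed by kubectl apply with the version-pinned binary. The enabled set can be confirmed afterwards with the standard subcommand:

	  minikube -p ingress-addon-legacy-442000 addons list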
	I0719 16:30:39.751392    2863 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-442000" context rescaled to 1 replicas
	I0719 16:30:39.751413    2863 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:30:39.752987    2863 out.go:177] * Verifying Kubernetes components...
	I0719 16:30:39.759775    2863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 16:30:39.766274    2863 kapi.go:59] client config for ingress-addon-legacy-442000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.key", CAFile:"/Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060ea010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0719 16:30:39.766431    2863 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-442000" to be "Ready" ...
	I0719 16:30:39.769437    2863 node_ready.go:49] node "ingress-addon-legacy-442000" has status "Ready":"True"
	I0719 16:30:39.769443    2863 node_ready.go:38] duration metric: took 3.004417ms waiting for node "ingress-addon-legacy-442000" to be "Ready" ...
	I0719 16:30:39.769446    2863 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 16:30:39.774093    2863 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-8pvp9" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:41.800747    2863 pod_ready.go:102] pod "coredns-66bff467f8-8pvp9" in "kube-system" namespace has status "Ready":"False"
	I0719 16:30:44.300938    2863 pod_ready.go:102] pod "coredns-66bff467f8-8pvp9" in "kube-system" namespace has status "Ready":"False"
	I0719 16:30:44.800196    2863 pod_ready.go:92] pod "coredns-66bff467f8-8pvp9" in "kube-system" namespace has status "Ready":"True"
	I0719 16:30:44.800234    2863 pod_ready.go:81] duration metric: took 5.026214583s waiting for pod "coredns-66bff467f8-8pvp9" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:44.800253    2863 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-442000" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:44.807769    2863 pod_ready.go:92] pod "etcd-ingress-addon-legacy-442000" in "kube-system" namespace has status "Ready":"True"
	I0719 16:30:44.807789    2863 pod_ready.go:81] duration metric: took 7.526125ms waiting for pod "etcd-ingress-addon-legacy-442000" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:44.807806    2863 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-442000" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:44.817584    2863 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-442000" in "kube-system" namespace has status "Ready":"True"
	I0719 16:30:44.817607    2863 pod_ready.go:81] duration metric: took 9.792208ms waiting for pod "kube-apiserver-ingress-addon-legacy-442000" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:44.817620    2863 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-442000" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:44.823123    2863 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-442000" in "kube-system" namespace has status "Ready":"True"
	I0719 16:30:44.823134    2863 pod_ready.go:81] duration metric: took 5.506458ms waiting for pod "kube-controller-manager-ingress-addon-legacy-442000" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:44.823143    2863 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ttt5c" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:44.827954    2863 pod_ready.go:92] pod "kube-proxy-ttt5c" in "kube-system" namespace has status "Ready":"True"
	I0719 16:30:44.827964    2863 pod_ready.go:81] duration metric: took 4.814792ms waiting for pod "kube-proxy-ttt5c" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:44.827972    2863 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-442000" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:44.990539    2863 request.go:628] Waited for 162.476209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-442000
	I0719 16:30:45.190583    2863 request.go:628] Waited for 195.435375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-442000
	I0719 16:30:45.196685    2863 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-442000" in "kube-system" namespace has status "Ready":"True"
	I0719 16:30:45.196715    2863 pod_ready.go:81] duration metric: took 368.737708ms waiting for pod "kube-scheduler-ingress-addon-legacy-442000" in "kube-system" namespace to be "Ready" ...
	I0719 16:30:45.196742    2863 pod_ready.go:38] duration metric: took 5.427380959s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0719 16:30:45.196801    2863 api_server.go:52] waiting for apiserver process to appear ...
	I0719 16:30:45.197185    2863 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0719 16:30:45.215573    2863 api_server.go:72] duration metric: took 5.464230916s to wait for apiserver process to appear ...
	I0719 16:30:45.215590    2863 api_server.go:88] waiting for apiserver healthz status ...
	I0719 16:30:45.215608    2863 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0719 16:30:45.224576    2863 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
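	The healthz probe above can be reproduced by hand with curl, reusing the certificate paths from the client config logged earlier (a sketch; assumes those files are still present on the host):

	  curl --cacert /Users/jenkins/minikube-integration/15585-1056/.minikube/ca.crt \
	       --cert /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt \
	       --key /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.key \
	       https://192.168.105.6:8443/healthz

	A healthy apiserver answers 200 with the body "ok", matching the response above.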
	I0719 16:30:45.225709    2863 api_server.go:141] control plane version: v1.18.20
	I0719 16:30:45.225729    2863 api_server.go:131] duration metric: took 10.131625ms to wait for apiserver health ...
	I0719 16:30:45.225737    2863 system_pods.go:43] waiting for kube-system pods to appear ...
	I0719 16:30:45.390552    2863 request.go:628] Waited for 164.713166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0719 16:30:45.403510    2863 system_pods.go:59] 7 kube-system pods found
	I0719 16:30:45.403552    2863 system_pods.go:61] "coredns-66bff467f8-8pvp9" [a8d9cf7d-1355-4e84-85b0-9616997c72f1] Running
	I0719 16:30:45.403577    2863 system_pods.go:61] "etcd-ingress-addon-legacy-442000" [1c4b198f-c3ad-4ec6-8399-02774f02dc1f] Running
	I0719 16:30:45.403592    2863 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-442000" [db74d6f2-51c6-4cbe-af57-b2bfab503f7f] Running
	I0719 16:30:45.403604    2863 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-442000" [07c1a9be-7707-4615-a52e-43f174bd72c2] Running
	I0719 16:30:45.403614    2863 system_pods.go:61] "kube-proxy-ttt5c" [e278faf5-c7b1-4bed-8d61-1a0090991f9a] Running
	I0719 16:30:45.403624    2863 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-442000" [3c8c539d-7fbd-4992-90cd-f1e3a4c9e153] Running
	I0719 16:30:45.403635    2863 system_pods.go:61] "storage-provisioner" [24fc774e-0bca-42dd-b12f-d4069d77a813] Running
	I0719 16:30:45.403651    2863 system_pods.go:74] duration metric: took 177.906125ms to wait for pod list to return data ...
	I0719 16:30:45.403671    2863 default_sa.go:34] waiting for default service account to be created ...
	I0719 16:30:45.588559    2863 request.go:628] Waited for 184.760209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0719 16:30:45.593851    2863 default_sa.go:45] found service account: "default"
	I0719 16:30:45.593879    2863 default_sa.go:55] duration metric: took 190.199583ms for default service account to be created ...
	I0719 16:30:45.593894    2863 system_pods.go:116] waiting for k8s-apps to be running ...
	I0719 16:30:45.790554    2863 request.go:628] Waited for 196.540083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0719 16:30:45.803884    2863 system_pods.go:86] 7 kube-system pods found
	I0719 16:30:45.803919    2863 system_pods.go:89] "coredns-66bff467f8-8pvp9" [a8d9cf7d-1355-4e84-85b0-9616997c72f1] Running
	I0719 16:30:45.803933    2863 system_pods.go:89] "etcd-ingress-addon-legacy-442000" [1c4b198f-c3ad-4ec6-8399-02774f02dc1f] Running
	I0719 16:30:45.803943    2863 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-442000" [db74d6f2-51c6-4cbe-af57-b2bfab503f7f] Running
	I0719 16:30:45.803954    2863 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-442000" [07c1a9be-7707-4615-a52e-43f174bd72c2] Running
	I0719 16:30:45.803964    2863 system_pods.go:89] "kube-proxy-ttt5c" [e278faf5-c7b1-4bed-8d61-1a0090991f9a] Running
	I0719 16:30:45.803991    2863 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-442000" [3c8c539d-7fbd-4992-90cd-f1e3a4c9e153] Running
	I0719 16:30:45.804002    2863 system_pods.go:89] "storage-provisioner" [24fc774e-0bca-42dd-b12f-d4069d77a813] Running
	I0719 16:30:45.804015    2863 system_pods.go:126] duration metric: took 210.115875ms to wait for k8s-apps to be running ...
	I0719 16:30:45.804031    2863 system_svc.go:44] waiting for kubelet service to be running ....
	I0719 16:30:45.804284    2863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0719 16:30:45.820715    2863 system_svc.go:56] duration metric: took 16.683417ms WaitForService to wait for kubelet.
	I0719 16:30:45.820732    2863 kubeadm.go:581] duration metric: took 6.06940625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0719 16:30:45.820752    2863 node_conditions.go:102] verifying NodePressure condition ...
	I0719 16:30:45.990518    2863 request.go:628] Waited for 169.695ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0719 16:30:45.997917    2863 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0719 16:30:45.997992    2863 node_conditions.go:123] node cpu capacity is 2
	I0719 16:30:45.998019    2863 node_conditions.go:105] duration metric: took 177.262833ms to run NodePressure ...
	I0719 16:30:45.998040    2863 start.go:228] waiting for startup goroutines ...
	I0719 16:30:45.998053    2863 start.go:233] waiting for cluster config update ...
	I0719 16:30:45.998088    2863 start.go:242] writing updated cluster config ...
	I0719 16:30:45.999335    2863 ssh_runner.go:195] Run: rm -f paused
	I0719 16:30:46.062927    2863 start.go:578] kubectl: 1.27.2, cluster: 1.18.20 (minor skew: 9)
	I0719 16:30:46.068181    2863 out.go:177] 
	W0719 16:30:46.073152    2863 out.go:239] ! /usr/local/bin/kubectl is version 1.27.2, which may have incompatibilities with Kubernetes 1.18.20.
	I0719 16:30:46.078119    2863 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0719 16:30:46.086214    2863 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-442000" cluster and "default" namespace by default
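	The skew warning above reflects kubectl's support policy of one minor version either side of the server; 1.27 against 1.18 is nine minors apart. The suggested workaround runs a client that matches the cluster, e.g.:

	  minikube -p ingress-addon-legacy-442000 kubectl -- get pods -A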
	
	* 
	* ==> Docker <==
	* -- Journal begins at Wed 2023-07-19 23:29:57 UTC, ends at Wed 2023-07-19 23:31:58 UTC. --
	Jul 19 23:31:33 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:33.532446703Z" level=info msg="shim disconnected" id=db2706931abfeec88dfa4611defe4e99c61b2dc7b0446f6efb3df5d6258a67ec namespace=moby
	Jul 19 23:31:33 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:33.532492536Z" level=warning msg="cleaning up after shim disconnected" id=db2706931abfeec88dfa4611defe4e99c61b2dc7b0446f6efb3df5d6258a67ec namespace=moby
	Jul 19 23:31:33 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:33.532509370Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:31:46 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:46.669287889Z" level=info msg="shim disconnected" id=8dff81323134262c93c4cf4f2efb52b9ad8b2181373b8220ee708b1b9d8c5c39 namespace=moby
	Jul 19 23:31:46 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:46.669442097Z" level=warning msg="cleaning up after shim disconnected" id=8dff81323134262c93c4cf4f2efb52b9ad8b2181373b8220ee708b1b9d8c5c39 namespace=moby
	Jul 19 23:31:46 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:46.669449263Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:31:46 ingress-addon-legacy-442000 dockerd[1069]: time="2023-07-19T23:31:46.669167557Z" level=info msg="ignoring event" container=8dff81323134262c93c4cf4f2efb52b9ad8b2181373b8220ee708b1b9d8c5c39 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:31:47 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:47.694711007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 23:31:47 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:47.695107172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:31:47 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:47.695130547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 23:31:47 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:47.695144755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 23:31:47 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:47.734399128Z" level=info msg="shim disconnected" id=05017d44d7a1623b98494cc2603087a6b7ad11d06f899422f12997d279db2d41 namespace=moby
	Jul 19 23:31:47 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:47.734540043Z" level=warning msg="cleaning up after shim disconnected" id=05017d44d7a1623b98494cc2603087a6b7ad11d06f899422f12997d279db2d41 namespace=moby
	Jul 19 23:31:47 ingress-addon-legacy-442000 dockerd[1069]: time="2023-07-19T23:31:47.734674251Z" level=info msg="ignoring event" container=05017d44d7a1623b98494cc2603087a6b7ad11d06f899422f12997d279db2d41 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:31:47 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:47.734825792Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:31:53 ingress-addon-legacy-442000 dockerd[1069]: time="2023-07-19T23:31:53.155844764Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=815fd547e991acd24c15eeb12ff3e51046a6ed60053e216b282b51d32b50a335
	Jul 19 23:31:53 ingress-addon-legacy-442000 dockerd[1069]: time="2023-07-19T23:31:53.160948615Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=815fd547e991acd24c15eeb12ff3e51046a6ed60053e216b282b51d32b50a335
	Jul 19 23:31:53 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:53.268239305Z" level=info msg="shim disconnected" id=815fd547e991acd24c15eeb12ff3e51046a6ed60053e216b282b51d32b50a335 namespace=moby
	Jul 19 23:31:53 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:53.268306638Z" level=warning msg="cleaning up after shim disconnected" id=815fd547e991acd24c15eeb12ff3e51046a6ed60053e216b282b51d32b50a335 namespace=moby
	Jul 19 23:31:53 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:53.268318638Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 23:31:53 ingress-addon-legacy-442000 dockerd[1069]: time="2023-07-19T23:31:53.268504762Z" level=info msg="ignoring event" container=815fd547e991acd24c15eeb12ff3e51046a6ed60053e216b282b51d32b50a335 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:31:53 ingress-addon-legacy-442000 dockerd[1069]: time="2023-07-19T23:31:53.304812254Z" level=info msg="ignoring event" container=1f653c390774f0c09a8dc13bd09369af1bdca742e5f11031f440d72629fb4557 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 19 23:31:53 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:53.304872420Z" level=info msg="shim disconnected" id=1f653c390774f0c09a8dc13bd09369af1bdca742e5f11031f440d72629fb4557 namespace=moby
	Jul 19 23:31:53 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:53.304900420Z" level=warning msg="cleaning up after shim disconnected" id=1f653c390774f0c09a8dc13bd09369af1bdca742e5f11031f440d72629fb4557 namespace=moby
	Jul 19 23:31:53 ingress-addon-legacy-442000 dockerd[1075]: time="2023-07-19T23:31:53.304906253Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	05017d44d7a16       13753a81eccfd                                                                                                      11 seconds ago       Exited              hello-world-app           2                   2b81bc35dc102
	f9621ae82f0c0       nginx@sha256:2d194184b067db3598771b4cf326cfe6ad5051937ba1132b8b7d4b0184e0d0a6                                      35 seconds ago       Running             nginx                     0                   c3b28a88cf6ce
	815fd547e991a       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   57 seconds ago       Exited              controller                0                   1f653c390774f
	a0a2a9e2aecfa       a883f7fc35610                                                                                                      About a minute ago   Exited              patch                     1                   0430dce64a04e
	b719710a047f5       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   5f4c9d9af5ff6
	a1e4f8645cbc3       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   e439c35c74034
	cc1ffc13bb9bc       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   fa6d145eb1bae
	4f3214ec353f3       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   bc21aae173ed7
	b106c0b4bc062       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   d9bf92c758af1
	729d4ee1dd5ea       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   0ec2590e49e8c
	b9b7d1d9b9b33       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   8b40110f6a320
	7b4b2b23ede4b       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   c92932846a449
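	The table above shows hello-world-app Exited on its third attempt (ATTEMPT 2) and the ingress controller Exited, apparently force-killed per the Docker journal above. A typical triage step for the crashing container, using the pod name from the node description below (a sketch; assumes kubectl points at this cluster):

	  kubectl describe pod hello-world-app-5f5d8b66bb-8b8rl
	  kubectl logs hello-world-app-5f5d8b66bb-8b8rl --previous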
	
	* 
	* ==> coredns [cc1ffc13bb9b] <==
	* [INFO] 172.17.0.1:41896 - 946 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030417s
	[INFO] 172.17.0.1:50945 - 10 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00001275s
	[INFO] 172.17.0.1:41896 - 26346 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000024625s
	[INFO] 172.17.0.1:50945 - 41991 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000008417s
	[INFO] 172.17.0.1:41896 - 42705 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000043207s
	[INFO] 172.17.0.1:50945 - 43429 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009125s
	[INFO] 172.17.0.1:41896 - 55025 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000086749s
	[INFO] 172.17.0.1:50945 - 36366 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008667s
	[INFO] 172.17.0.1:50945 - 59781 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000007749s
	[INFO] 172.17.0.1:50945 - 11353 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000007417s
	[INFO] 172.17.0.1:50945 - 31944 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000009166s
	[INFO] 172.17.0.1:62261 - 1299 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000044458s
	[INFO] 172.17.0.1:63494 - 33459 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000025916s
	[INFO] 172.17.0.1:63494 - 22552 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015624s
	[INFO] 172.17.0.1:62261 - 34398 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000015583s
	[INFO] 172.17.0.1:63494 - 14427 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011042s
	[INFO] 172.17.0.1:62261 - 14468 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00000975s
	[INFO] 172.17.0.1:62261 - 18767 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013249s
	[INFO] 172.17.0.1:63494 - 57265 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001075s
	[INFO] 172.17.0.1:62261 - 58442 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008625s
	[INFO] 172.17.0.1:63494 - 64086 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008459s
	[INFO] 172.17.0.1:62261 - 8761 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008417s
	[INFO] 172.17.0.1:63494 - 42450 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00001275s
	[INFO] 172.17.0.1:63494 - 7387 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000010916s
	[INFO] 172.17.0.1:62261 - 14596 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014167s
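	The runs of NXDOMAIN answers above are ordinary search-path expansion rather than failures: with the cluster default ndots:5, the name hello-world-app.default.svc.cluster.local (four dots) is first tried against each search domain (ingress-nginx.svc.cluster.local, svc.cluster.local, cluster.local) before the bare form returns NOERROR. The querying pod's /etc/resolv.conf would look roughly like this (a sketch of the kubelet-generated file; the nameserver shown is the conventional kube-dns ClusterIP, not captured in this log):

	  nameserver 10.96.0.10
	  search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
	  options ndots:5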
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-442000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-442000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd270bde4b7e946c995f2329996587ae45fe53d4
	                    minikube.k8s.io/name=ingress-addon-legacy-442000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_07_19T16_30_24_0700
	                    minikube.k8s.io/version=v1.31.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 19 Jul 2023 23:30:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-442000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 19 Jul 2023 23:31:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 19 Jul 2023 23:31:30 +0000   Wed, 19 Jul 2023 23:30:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 19 Jul 2023 23:31:30 +0000   Wed, 19 Jul 2023 23:30:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 19 Jul 2023 23:31:30 +0000   Wed, 19 Jul 2023 23:30:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 19 Jul 2023 23:31:30 +0000   Wed, 19 Jul 2023 23:30:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-442000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 412e3d788a6d4b458cec1790b5311853
	  System UUID:                412e3d788a6d4b458cec1790b5311853
	  Boot ID:                    9b2c6342-3a6e-4603-b29d-b8096cb20241
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.4
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-8b8rl                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 coredns-66bff467f8-8pvp9                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     79s
	  kube-system                 etcd-ingress-addon-legacy-442000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-apiserver-ingress-addon-legacy-442000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-442000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-ttt5c                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-ingress-addon-legacy-442000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 88s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s   kubelet     Node ingress-addon-legacy-442000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s   kubelet     Node ingress-addon-legacy-442000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s   kubelet     Node ingress-addon-legacy-442000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s   kubelet     Node ingress-addon-legacy-442000 status is now: NodeReady
	  Normal  Starting                 79s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Jul19 23:29] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.662674] EINJ: EINJ table not found.
	[  +0.525771] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.044253] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000803] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.269105] systemd-fstab-generator[481]: Ignoring "noauto" for root device
	[  +0.075304] systemd-fstab-generator[492]: Ignoring "noauto" for root device
	[Jul19 23:30] systemd-fstab-generator[713]: Ignoring "noauto" for root device
	[  +0.202221] systemd-fstab-generator[749]: Ignoring "noauto" for root device
	[  +0.079341] systemd-fstab-generator[760]: Ignoring "noauto" for root device
	[  +0.088097] systemd-fstab-generator[773]: Ignoring "noauto" for root device
	[  +1.147332] kauditd_printk_skb: 17 callbacks suppressed
	[  +3.164971] systemd-fstab-generator[1062]: Ignoring "noauto" for root device
	[  +4.243111] systemd-fstab-generator[1524]: Ignoring "noauto" for root device
	[  +8.490584] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.099351] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.673279] systemd-fstab-generator[2606]: Ignoring "noauto" for root device
	[ +15.930990] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.700600] kauditd_printk_skb: 9 callbacks suppressed
	[  +6.269417] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Jul19 23:31] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.466091] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [7b4b2b23ede4] <==
	* raft2023/07/19 23:30:19 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/07/19 23:30:19 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/07/19 23:30:19 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/07/19 23:30:19 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-07-19 23:30:19.194625 W | auth: simple token is not cryptographically signed
	2023-07-19 23:30:19.494833 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-07-19 23:30:19.499436 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-07-19 23:30:19.502675 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-07-19 23:30:19.503023 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-07-19 23:30:19.503105 I | embed: listening for peers on 192.168.105.6:2380
	raft2023/07/19 23:30:19 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-07-19 23:30:19.503304 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/07/19 23:30:19 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/07/19 23:30:19 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/07/19 23:30:19 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/07/19 23:30:19 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/07/19 23:30:19 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-07-19 23:30:19.835345 I | etcdserver: setting up the initial cluster version to 3.4
	2023-07-19 23:30:19.836155 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-07-19 23:30:19.836214 I | etcdserver/api: enabled capabilities for version 3.4
	2023-07-19 23:30:19.836262 I | etcdserver: published {Name:ingress-addon-legacy-442000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-07-19 23:30:19.836351 I | embed: ready to serve client requests
	2023-07-19 23:30:19.836983 I | embed: serving client requests on 192.168.105.6:2379
	2023-07-19 23:30:19.837176 I | embed: ready to serve client requests
	2023-07-19 23:30:19.837646 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  23:31:58 up 2 min,  0 users,  load average: 0.96, 0.35, 0.12
	Linux ingress-addon-legacy-442000 5.10.57 #1 SMP PREEMPT Fri Jul 14 22:49:12 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b9b7d1d9b9b3] <==
	* I0719 23:30:21.444297       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 23:30:21.444339       1 cache.go:39] Caches are synced for autoregister controller
	I0719 23:30:21.444355       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 23:30:21.444365       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0719 23:30:21.444371       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0719 23:30:22.342154       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0719 23:30:22.342406       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0719 23:30:22.357641       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0719 23:30:22.365547       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0719 23:30:22.365573       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0719 23:30:22.499257       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 23:30:22.511601       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0719 23:30:22.617507       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0719 23:30:22.617962       1 controller.go:609] quota admission added evaluator for: endpoints
	I0719 23:30:22.619722       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 23:30:23.650215       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0719 23:30:24.168237       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0719 23:30:24.375374       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0719 23:30:30.575060       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 23:30:39.151194       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0719 23:30:39.653247       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0719 23:30:46.459243       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0719 23:31:20.633765       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0719 23:31:51.154230       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	E0719 23:31:52.641014       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [729d4ee1dd5e] <==
	* E0719 23:30:39.321420       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0719 23:30:39.460523       1 shared_informer.go:230] Caches are synced for HPA 
	I0719 23:30:39.581671       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0719 23:30:39.593791       1 shared_informer.go:230] Caches are synced for attach detach 
	I0719 23:30:39.600263       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0719 23:30:39.603919       1 shared_informer.go:230] Caches are synced for expand 
	I0719 23:30:39.639488       1 shared_informer.go:230] Caches are synced for stateful set 
	I0719 23:30:39.651894       1 shared_informer.go:230] Caches are synced for deployment 
	I0719 23:30:39.657862       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"28d206e8-08e9-459c-a85e-90b65ad98aa6", APIVersion:"apps/v1", ResourceVersion:"329", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-8pvp9
	I0719 23:30:39.660226       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"78f162b9-4ac3-405f-aa8b-bd3387977147", APIVersion:"apps/v1", ResourceVersion:"326", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0719 23:30:39.671202       1 shared_informer.go:230] Caches are synced for disruption 
	I0719 23:30:39.671217       1 disruption.go:339] Sending events to api server.
	I0719 23:30:39.702388       1 shared_informer.go:230] Caches are synced for resource quota 
	I0719 23:30:39.710959       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0719 23:30:39.713921       1 shared_informer.go:230] Caches are synced for resource quota 
	I0719 23:30:39.756198       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0719 23:30:39.756208       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0719 23:30:46.454523       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"03b71be1-2e02-4d75-96c7-f9cb4c18fea9", APIVersion:"apps/v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0719 23:30:46.465334       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"1b99f1c0-d9dc-4ce9-84b0-0b70d67cc387", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-7jsjb
	I0719 23:30:46.475279       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"38518830-f707-4b6e-af79-96756f8df99e", APIVersion:"batch/v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-z97w8
	I0719 23:30:46.490140       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3727d782-268b-4751-9bd1-926e54be8c56", APIVersion:"batch/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-gfvpt
	I0719 23:30:49.880840       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"38518830-f707-4b6e-af79-96756f8df99e", APIVersion:"batch/v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0719 23:30:50.904303       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"3727d782-268b-4751-9bd1-926e54be8c56", APIVersion:"batch/v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0719 23:31:29.929504       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"63fc94c1-65e1-400b-9fe2-264e95b05ab8", APIVersion:"apps/v1", ResourceVersion:"564", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0719 23:31:29.937919       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"d5bd7d25-50ff-40ad-8201-571cb268f07e", APIVersion:"apps/v1", ResourceVersion:"565", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-8b8rl
	
	* 
	* ==> kube-proxy [4f3214ec353f] <==
	* W0719 23:30:39.793804       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0719 23:30:39.798931       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0719 23:30:39.798946       1 server_others.go:186] Using iptables Proxier.
	I0719 23:30:39.799099       1 server.go:583] Version: v1.18.20
	I0719 23:30:39.799539       1 config.go:315] Starting service config controller
	I0719 23:30:39.799573       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0719 23:30:39.799636       1 config.go:133] Starting endpoints config controller
	I0719 23:30:39.799658       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0719 23:30:39.900476       1 shared_informer.go:230] Caches are synced for service config 
	I0719 23:30:39.900571       1 shared_informer.go:230] Caches are synced for endpoints config 
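	kube-proxy above falls back to the iptables proxier because no mode was configured ("Unknown proxy mode", assuming iptables proxy). The Service NAT rules it programs can be inspected on the node, e.g. (a sketch; run from the host against this profile):

	  minikube -p ingress-addon-legacy-442000 ssh -- sudo iptables-save | grep KUBE-SVC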
	
	* 
	* ==> kube-scheduler [b106c0b4bc06] <==
	* W0719 23:30:21.387025       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 23:30:21.387028       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 23:30:21.397193       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0719 23:30:21.397293       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0719 23:30:21.398893       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 23:30:21.398907       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 23:30:21.399320       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0719 23:30:21.399505       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0719 23:30:21.400865       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0719 23:30:21.400951       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0719 23:30:21.401183       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0719 23:30:21.401211       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 23:30:21.401257       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 23:30:21.401276       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 23:30:21.401313       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0719 23:30:21.401356       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0719 23:30:21.401381       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0719 23:30:21.401421       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0719 23:30:21.401458       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0719 23:30:21.401383       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0719 23:30:22.257472       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0719 23:30:22.270921       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0719 23:30:22.323728       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0719 23:30:22.428408       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0719 23:30:22.599629       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Wed 2023-07-19 23:29:57 UTC, ends at Wed 2023-07-19 23:31:58 UTC. --
	Jul 19 23:31:35 ingress-addon-legacy-442000 kubelet[2612]: E0719 23:31:35.488529    2612 pod_workers.go:191] Error syncing pod 7f54a8d8-4f47-43ed-9d21-e2fe5a75eca8 ("hello-world-app-5f5d8b66bb-8b8rl_default(7f54a8d8-4f47-43ed-9d21-e2fe5a75eca8)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-8b8rl_default(7f54a8d8-4f47-43ed-9d21-e2fe5a75eca8)"
	Jul 19 23:31:39 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:39.626024    2612 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 19433c78c66197140acb404f8d04b4072bc71cf9467eacca861feffc246debc5
	Jul 19 23:31:39 ingress-addon-legacy-442000 kubelet[2612]: E0719 23:31:39.629014    2612 pod_workers.go:191] Error syncing pod fb1b8a8b-ea7b-420f-91c9-aa47b0535a2d ("kube-ingress-dns-minikube_kube-system(fb1b8a8b-ea7b-420f-91c9-aa47b0535a2d)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(fb1b8a8b-ea7b-420f-91c9-aa47b0535a2d)"
	Jul 19 23:31:45 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:45.411988    2612 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-2whcm" (UniqueName: "kubernetes.io/secret/fb1b8a8b-ea7b-420f-91c9-aa47b0535a2d-minikube-ingress-dns-token-2whcm") pod "fb1b8a8b-ea7b-420f-91c9-aa47b0535a2d" (UID: "fb1b8a8b-ea7b-420f-91c9-aa47b0535a2d")
	Jul 19 23:31:45 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:45.414336    2612 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb1b8a8b-ea7b-420f-91c9-aa47b0535a2d-minikube-ingress-dns-token-2whcm" (OuterVolumeSpecName: "minikube-ingress-dns-token-2whcm") pod "fb1b8a8b-ea7b-420f-91c9-aa47b0535a2d" (UID: "fb1b8a8b-ea7b-420f-91c9-aa47b0535a2d"). InnerVolumeSpecName "minikube-ingress-dns-token-2whcm". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 19 23:31:45 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:45.516344    2612 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-2whcm" (UniqueName: "kubernetes.io/secret/fb1b8a8b-ea7b-420f-91c9-aa47b0535a2d-minikube-ingress-dns-token-2whcm") on node "ingress-addon-legacy-442000" DevicePath ""
	Jul 19 23:31:47 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:47.627676    2612 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: db2706931abfeec88dfa4611defe4e99c61b2dc7b0446f6efb3df5d6258a67ec
	Jul 19 23:31:47 ingress-addon-legacy-442000 kubelet[2612]: W0719 23:31:47.687577    2612 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-8b8rl through plugin: invalid network status for
	Jul 19 23:31:47 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:47.732469    2612 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 19433c78c66197140acb404f8d04b4072bc71cf9467eacca861feffc246debc5
	Jul 19 23:31:47 ingress-addon-legacy-442000 kubelet[2612]: W0719 23:31:47.749932    2612 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod7f54a8d8-4f47-43ed-9d21-e2fe5a75eca8/05017d44d7a1623b98494cc2603087a6b7ad11d06f899422f12997d279db2d41": none of the resources are being tracked.
	Jul 19 23:31:48 ingress-addon-legacy-442000 kubelet[2612]: W0719 23:31:48.752588    2612 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-8b8rl through plugin: invalid network status for
	Jul 19 23:31:48 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:48.760010    2612 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: db2706931abfeec88dfa4611defe4e99c61b2dc7b0446f6efb3df5d6258a67ec
	Jul 19 23:31:48 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:48.760459    2612 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 05017d44d7a1623b98494cc2603087a6b7ad11d06f899422f12997d279db2d41
	Jul 19 23:31:48 ingress-addon-legacy-442000 kubelet[2612]: E0719 23:31:48.760955    2612 pod_workers.go:191] Error syncing pod 7f54a8d8-4f47-43ed-9d21-e2fe5a75eca8 ("hello-world-app-5f5d8b66bb-8b8rl_default(7f54a8d8-4f47-43ed-9d21-e2fe5a75eca8)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-8b8rl_default(7f54a8d8-4f47-43ed-9d21-e2fe5a75eca8)"
	Jul 19 23:31:49 ingress-addon-legacy-442000 kubelet[2612]: W0719 23:31:49.776388    2612 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-8b8rl through plugin: invalid network status for
	Jul 19 23:31:51 ingress-addon-legacy-442000 kubelet[2612]: E0719 23:31:51.152504    2612 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-7jsjb.177368cc21e68334", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-7jsjb", UID:"32f5150f-4763-4bc4-981e-010d01e3b42e", APIVersion:"v1", ResourceVersion:"422", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-442000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1263a79c8ab3d34, ext:86998267916, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1263a79c8ab3d34, ext:86998267916, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-7jsjb.177368cc21e68334" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 19 23:31:51 ingress-addon-legacy-442000 kubelet[2612]: E0719 23:31:51.156260    2612 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-7jsjb.177368cc21e68334", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-7jsjb", UID:"32f5150f-4763-4bc4-981e-010d01e3b42e", APIVersion:"v1", ResourceVersion:"422", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-442000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1263a79c8ab3d34, ext:86998267916, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1263a79c9129fde, ext:87005043424, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-7jsjb.177368cc21e68334" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jul 19 23:31:53 ingress-addon-legacy-442000 kubelet[2612]: W0719 23:31:53.841336    2612 pod_container_deletor.go:77] Container "1f653c390774f0c09a8dc13bd09369af1bdca742e5f11031f440d72629fb4557" not found in pod's containers
	Jul 19 23:31:55 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:55.341800    2612 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/32f5150f-4763-4bc4-981e-010d01e3b42e-webhook-cert") pod "32f5150f-4763-4bc4-981e-010d01e3b42e" (UID: "32f5150f-4763-4bc4-981e-010d01e3b42e")
	Jul 19 23:31:55 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:55.341943    2612 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-94jgv" (UniqueName: "kubernetes.io/secret/32f5150f-4763-4bc4-981e-010d01e3b42e-ingress-nginx-token-94jgv") pod "32f5150f-4763-4bc4-981e-010d01e3b42e" (UID: "32f5150f-4763-4bc4-981e-010d01e3b42e")
	Jul 19 23:31:55 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:55.353082    2612 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f5150f-4763-4bc4-981e-010d01e3b42e-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "32f5150f-4763-4bc4-981e-010d01e3b42e" (UID: "32f5150f-4763-4bc4-981e-010d01e3b42e"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 19 23:31:55 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:55.355077    2612 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f5150f-4763-4bc4-981e-010d01e3b42e-ingress-nginx-token-94jgv" (OuterVolumeSpecName: "ingress-nginx-token-94jgv") pod "32f5150f-4763-4bc4-981e-010d01e3b42e" (UID: "32f5150f-4763-4bc4-981e-010d01e3b42e"). InnerVolumeSpecName "ingress-nginx-token-94jgv". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jul 19 23:31:55 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:55.443151    2612 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/32f5150f-4763-4bc4-981e-010d01e3b42e-webhook-cert") on node "ingress-addon-legacy-442000" DevicePath ""
	Jul 19 23:31:55 ingress-addon-legacy-442000 kubelet[2612]: I0719 23:31:55.443223    2612 reconciler.go:319] Volume detached for volume "ingress-nginx-token-94jgv" (UniqueName: "kubernetes.io/secret/32f5150f-4763-4bc4-981e-010d01e3b42e-ingress-nginx-token-94jgv") on node "ingress-addon-legacy-442000" DevicePath ""
	Jul 19 23:31:56 ingress-addon-legacy-442000 kubelet[2612]: W0719 23:31:56.657779    2612 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/32f5150f-4763-4bc4-981e-010d01e3b42e/volumes" does not exist
	
	* 
	* ==> storage-provisioner [a1e4f8645cbc] <==
	* I0719 23:30:42.798446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0719 23:30:42.802786       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0719 23:30:42.802810       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0719 23:30:42.805063       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0719 23:30:42.805668       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-442000_9f186878-39d4-4451-9c7d-bbff8293b292!
	I0719 23:30:42.807332       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"025bbd3a-4ed9-4d44-95c2-afc29090e758", APIVersion:"v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-442000_9f186878-39d4-4451-9c7d-bbff8293b292 became leader
	I0719 23:30:42.906550       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-442000_9f186878-39d4-4451-9c7d-bbff8293b292!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-442000 -n ingress-addon-legacy-442000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-442000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (56.28s)
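Note: the post-mortem above closes by sweeping for pods that are not Running (helpers_test.go:261). For reference, the same check can be made programmatically with client-go; this is a minimal illustrative sketch, not part of the test suite, and the kubeconfig path (clientcmd.RecommendedHomeFile) is an assumption here, since the harness points KUBECONFIG at its own minikube-integration file.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the CI harness uses its own file instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same field selector the post-mortem helper passes to kubectl.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}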

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.29s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-976000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-976000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.2141315s)

                                                
                                                
-- stdout --
	* [mount-start-1-976000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-976000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-976000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-976000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-976000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-976000 -n mount-start-1-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-976000 -n mount-start-1-976000: exit status 7 (70.126834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-976000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.29s)
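Note: every qemu2 start in this report fails the same way: the driver cannot reach the socket_vmnet control socket. Before rerunning the suite, the socket can be probed directly from the host. This is a minimal sketch, not part of minikube, assuming only the SocketVMnetPath value (/var/run/socket_vmnet) shown in the traces.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket the qemu2 driver hands to socket_vmnet_client.
		// A refused connection reproduces the failure seen throughout this report.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe is refused, the socket_vmnet daemon is likely not running on the host, which would account for the uniform GUEST_PROVISION exits below.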

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-992000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
E0719 16:34:17.560354    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-992000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.876017583s)

                                                
                                                
-- stdout --
	* [multinode-992000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-992000 in cluster multinode-992000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-992000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 16:34:10.240288    3196 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:34:10.240397    3196 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:34:10.240401    3196 out.go:309] Setting ErrFile to fd 2...
	I0719 16:34:10.240404    3196 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:34:10.240507    3196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:34:10.241460    3196 out.go:303] Setting JSON to false
	I0719 16:34:10.256491    3196 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3821,"bootTime":1689805829,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:34:10.256557    3196 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:34:10.262155    3196 out.go:177] * [multinode-992000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:34:10.270134    3196 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:34:10.274188    3196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:34:10.270163    3196 notify.go:220] Checking for updates...
	I0719 16:34:10.277274    3196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:34:10.280129    3196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:34:10.283187    3196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:34:10.286186    3196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:34:10.289221    3196 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:34:10.293179    3196 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:34:10.300171    3196 start.go:298] selected driver: qemu2
	I0719 16:34:10.300177    3196 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:34:10.300183    3196 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:34:10.302089    3196 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:34:10.305192    3196 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:34:10.308553    3196 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:34:10.308584    3196 cni.go:84] Creating CNI manager for ""
	I0719 16:34:10.308587    3196 cni.go:136] 0 nodes found, recommending kindnet
	I0719 16:34:10.308591    3196 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 16:34:10.308598    3196 start_flags.go:319] config:
	{Name:multinode-992000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:34:10.313024    3196 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:34:10.320200    3196 out.go:177] * Starting control plane node multinode-992000 in cluster multinode-992000
	I0719 16:34:10.324195    3196 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:34:10.324241    3196 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:34:10.324252    3196 cache.go:57] Caching tarball of preloaded images
	I0719 16:34:10.324331    3196 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:34:10.324337    3196 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:34:10.324526    3196 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/multinode-992000/config.json ...
	I0719 16:34:10.324542    3196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/multinode-992000/config.json: {Name:mk9dfdf5c8c34a0547f70e5dc3b2fafd3ad6f368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:34:10.324755    3196 start.go:365] acquiring machines lock for multinode-992000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:34:10.324787    3196 start.go:369] acquired machines lock for "multinode-992000" in 25.875µs
	I0719 16:34:10.324798    3196 start.go:93] Provisioning new machine with config: &{Name:multinode-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:34:10.324836    3196 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:34:10.332242    3196 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:34:10.348904    3196 start.go:159] libmachine.API.Create for "multinode-992000" (driver="qemu2")
	I0719 16:34:10.348927    3196 client.go:168] LocalClient.Create starting
	I0719 16:34:10.348990    3196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:34:10.349016    3196 main.go:141] libmachine: Decoding PEM data...
	I0719 16:34:10.349031    3196 main.go:141] libmachine: Parsing certificate...
	I0719 16:34:10.349093    3196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:34:10.349109    3196 main.go:141] libmachine: Decoding PEM data...
	I0719 16:34:10.349117    3196 main.go:141] libmachine: Parsing certificate...
	I0719 16:34:10.349489    3196 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:34:10.460668    3196 main.go:141] libmachine: Creating SSH key...
	I0719 16:34:10.534754    3196 main.go:141] libmachine: Creating Disk image...
	I0719 16:34:10.534759    3196 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:34:10.534885    3196 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2
	I0719 16:34:10.543371    3196 main.go:141] libmachine: STDOUT: 
	I0719 16:34:10.543383    3196 main.go:141] libmachine: STDERR: 
	I0719 16:34:10.543440    3196 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2 +20000M
	I0719 16:34:10.550534    3196 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:34:10.550547    3196 main.go:141] libmachine: STDERR: 
	I0719 16:34:10.550562    3196 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2
	I0719 16:34:10.550569    3196 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:34:10.550613    3196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:5d:1d:0a:34:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2
	I0719 16:34:10.552117    3196 main.go:141] libmachine: STDOUT: 
	I0719 16:34:10.552131    3196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:34:10.552156    3196 client.go:171] LocalClient.Create took 203.22625ms
	I0719 16:34:12.554282    3196 start.go:128] duration metric: createHost completed in 2.229469459s
	I0719 16:34:12.554351    3196 start.go:83] releasing machines lock for "multinode-992000", held for 2.229594292s
	W0719 16:34:12.554417    3196 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:34:12.562868    3196 out.go:177] * Deleting "multinode-992000" in qemu2 ...
	W0719 16:34:12.582361    3196 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:34:12.582385    3196 start.go:687] Will try again in 5 seconds ...
	I0719 16:34:17.583195    3196 start.go:365] acquiring machines lock for multinode-992000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:34:17.583616    3196 start.go:369] acquired machines lock for "multinode-992000" in 327.208µs
	I0719 16:34:17.583701    3196 start.go:93] Provisioning new machine with config: &{Name:multinode-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:34:17.584027    3196 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:34:17.595680    3196 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:34:17.642459    3196 start.go:159] libmachine.API.Create for "multinode-992000" (driver="qemu2")
	I0719 16:34:17.642492    3196 client.go:168] LocalClient.Create starting
	I0719 16:34:17.642624    3196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:34:17.642698    3196 main.go:141] libmachine: Decoding PEM data...
	I0719 16:34:17.642719    3196 main.go:141] libmachine: Parsing certificate...
	I0719 16:34:17.642800    3196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:34:17.642841    3196 main.go:141] libmachine: Decoding PEM data...
	I0719 16:34:17.642855    3196 main.go:141] libmachine: Parsing certificate...
	I0719 16:34:17.643389    3196 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:34:17.768268    3196 main.go:141] libmachine: Creating SSH key...
	I0719 16:34:18.033529    3196 main.go:141] libmachine: Creating Disk image...
	I0719 16:34:18.033538    3196 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:34:18.033687    3196 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2
	I0719 16:34:18.042296    3196 main.go:141] libmachine: STDOUT: 
	I0719 16:34:18.042360    3196 main.go:141] libmachine: STDERR: 
	I0719 16:34:18.042423    3196 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2 +20000M
	I0719 16:34:18.049827    3196 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:34:18.049840    3196 main.go:141] libmachine: STDERR: 
	I0719 16:34:18.049886    3196 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2
	I0719 16:34:18.049892    3196 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:34:18.049941    3196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:11:c6:9b:e2:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2
	I0719 16:34:18.051494    3196 main.go:141] libmachine: STDOUT: 
	I0719 16:34:18.051507    3196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:34:18.051520    3196 client.go:171] LocalClient.Create took 409.031083ms
	I0719 16:34:20.053659    3196 start.go:128] duration metric: createHost completed in 2.469646167s
	I0719 16:34:20.053746    3196 start.go:83] releasing machines lock for "multinode-992000", held for 2.470148417s
	W0719 16:34:20.054240    3196 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-992000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-992000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:34:20.063000    3196 out.go:177] 
	W0719 16:34:20.066984    3196 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:34:20.067008    3196 out.go:239] * 
	* 
	W0719 16:34:20.069843    3196 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:34:20.075989    3196 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-992000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (66.335917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.94s)
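Note: the verbose trace above also shows minikube's recovery shape for this error: the first createHost fails (start.go:672), the half-created profile is deleted, and exactly one retry runs after five seconds (start.go:687) before the process exits with GUEST_PROVISION. Below is a stripped-down sketch of that control flow, with createHost as a hypothetical stand-in for the libmachine create call.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the libmachine create call; here it always fails
	// the way every attempt in this report does.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}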

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (109.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (125.642542ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-992000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- rollout status deployment/busybox: exit status 1 (55.126792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-992000"

                                                
                                                
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.620292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-992000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.630208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-992000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.05ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-992000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.537417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-992000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.514666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-992000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.67125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-992000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.170291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-992000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.573542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-992000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.627417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-992000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0719 16:35:39.480963    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.720709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-992000"

                                                
                                                
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0719 16:36:02.204164    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
E0719 16:36:02.210536    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
E0719 16:36:02.222612    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
E0719 16:36:02.244214    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
E0719 16:36:02.286273    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
E0719 16:36:02.368349    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
E0719 16:36:02.530437    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
E0719 16:36:02.852653    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
E0719 16:36:03.494917    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
E0719 16:36:04.777304    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
E0719 16:36:07.339552    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
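The eleven cert_rotation errors above arrive at roughly doubling intervals (about 6 ms, 12 ms, 22 ms, ... out to ~2.5 s), the signature of client-go's exponential backoff retrying a certificate watch. The watched file belongs to the ingress-addon-legacy-442000 profile, which an earlier test in this run already tore down, while the watcher lives on in the long-running test process (pid 1470 on every line). A toy sketch of that retry cadence, with the starting delay and factor picked to mirror the observed spacing rather than client-go's actual constants:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 6 * time.Millisecond // assumed starting point, eyeballed from the log
	elapsed := time.Duration(0)
	for i := 1; i <= 10; i++ {
		elapsed += delay
		fmt.Printf("retry %2d at +%v\n", i, elapsed)
		delay *= 2 // doubling matches the timestamp gaps above
	}
}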
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.663708ms)

** stderr ** 
	error: no server found for cluster "multinode-992000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.654791ms)

** stderr ** 
	error: no server found for cluster "multinode-992000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.103167ms)

** stderr ** 
	error: no server found for cluster "multinode-992000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- exec  -- nslookup kubernetes.default: exit status 1 (53.607875ms)

** stderr ** 
	error: no server found for cluster "multinode-992000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (53.994958ms)

** stderr ** 
	error: no server found for cluster "multinode-992000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (28.619875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (109.06s)
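Every kubectl invocation in this test fails before any network I/O: "no server found for cluster" is kubectl's kubeconfig validation error, raised when the named cluster entry is missing or has no server URL. Since the multinode-992000 VM never booted (see the socket_vmnet failures under RestartKeepsNodes below), minikube never wrote that entry. A minimal sketch of the same lookup, assuming the KUBECONFIG path echoed later in this report and the standard client-go loader:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the KUBECONFIG value in the RestartKeepsNodes logs below.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/15585-1056/kubeconfig")
	if err != nil {
		panic(err)
	}
	cluster, ok := cfg.Clusters["multinode-992000"]
	if !ok || cluster.Server == "" {
		// This is the condition kubectl reports as the error seen above.
		fmt.Println(`no server found for cluster "multinode-992000"`)
		return
	}
	fmt.Println("API server:", cluster.Server)
}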

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-992000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.33725ms)

** stderr ** 
	error: no server found for cluster "multinode-992000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (28.701ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-992000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-992000 -v 3 --alsologtostderr: exit status 89 (38.701709ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-992000"

-- /stdout --
** stderr ** 
	I0719 16:36:09.330224    3280 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:36:09.330398    3280 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:09.330403    3280 out.go:309] Setting ErrFile to fd 2...
	I0719 16:36:09.330406    3280 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:09.330514    3280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:36:09.330737    3280 mustload.go:65] Loading cluster: multinode-992000
	I0719 16:36:09.330917    3280 config.go:182] Loaded profile config "multinode-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:36:09.335377    3280 out.go:177] * The control plane node must be running for this command
	I0719 16:36:09.338342    3280 out.go:177]   To start a cluster, run: "minikube start -p multinode-992000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-992000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (28.409958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

TestMultiNode/serial/ProfileList (0.17s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-992000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-992000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-992000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.3\",\"ClusterName\":\"multinode-992000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (33.324583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.17s)
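The assertion at multinode_test.go:155 reduces to a node count over the profile JSON: three nodes were requested, but Config.Nodes above holds only the single control-plane entry that was ever configured. A pared-down sketch of that check; the struct below is a hypothetical reduction to just the fields involved, and the data is trimmed from the blob above:

package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct {
				Name         string
				ControlPlane bool
			}
		}
	} `json:"valid"`
}

func main() {
	// Abbreviated from the `profile list --output json` output above.
	data := `{"invalid":[],"valid":[{"Name":"multinode-992000","Config":{"Nodes":[{"Name":"","ControlPlane":true}]}}]}`

	var pl profileList
	if err := json.Unmarshal([]byte(data), &pl); err != nil {
		panic(err)
	}
	fmt.Printf("expected 3 nodes, found %d\n", len(pl.Valid[0].Config.Nodes)) // prints 1
}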

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-992000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-992000 status --output json --alsologtostderr: exit status 7 (28.915458ms)

-- stdout --
	{"Name":"multinode-992000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0719 16:36:09.571588    3290 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:36:09.571724    3290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:09.571727    3290 out.go:309] Setting ErrFile to fd 2...
	I0719 16:36:09.571729    3290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:09.571844    3290 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:36:09.571961    3290 out.go:303] Setting JSON to true
	I0719 16:36:09.571979    3290 mustload.go:65] Loading cluster: multinode-992000
	I0719 16:36:09.572044    3290 notify.go:220] Checking for updates...
	I0719 16:36:09.572154    3290 config.go:182] Loaded profile config "multinode-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:36:09.572159    3290 status.go:255] checking status of multinode-992000 ...
	I0719 16:36:09.572355    3290 status.go:330] multinode-992000 host status = "Stopped" (err=<nil>)
	I0719 16:36:09.572359    3290 status.go:343] host is not running, skipping remaining checks
	I0719 16:36:09.572361    3290 status.go:257] multinode-992000 status: &{Name:multinode-992000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-992000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (28.495542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
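The decode failure at multinode_test.go:180 is a shape mismatch rather than a transport error: with a single node, `minikube status --output json` prints one JSON object (visible in the stdout above), while the test unmarshals into a slice of cmd.Status. A minimal reproduction, using a stand-in struct since cmd.Status is internal to minikube:

package main

import (
	"encoding/json"
	"fmt"
)

// Status is a stand-in mirroring the fields visible in the stdout above.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	out := `{"Name":"multinode-992000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`

	var statuses []Status
	// Decoding a bare object into a slice fails just as the test reports:
	// "json: cannot unmarshal object into Go value of type []cmd.Status".
	if err := json.Unmarshal([]byte(out), &statuses); err != nil {
		fmt.Println("decode error:", err)
	}
}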

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-992000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-992000 node stop m03: exit status 85 (46.019916ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-992000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-992000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-992000 status: exit status 7 (28.840959ms)

-- stdout --
	multinode-992000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-992000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-992000 status --alsologtostderr: exit status 7 (28.646791ms)

-- stdout --
	multinode-992000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 16:36:09.704416    3298 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:36:09.704545    3298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:09.704550    3298 out.go:309] Setting ErrFile to fd 2...
	I0719 16:36:09.704552    3298 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:09.704672    3298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:36:09.704783    3298 out.go:303] Setting JSON to false
	I0719 16:36:09.704796    3298 mustload.go:65] Loading cluster: multinode-992000
	I0719 16:36:09.704858    3298 notify.go:220] Checking for updates...
	I0719 16:36:09.704970    3298 config.go:182] Loaded profile config "multinode-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:36:09.704975    3298 status.go:255] checking status of multinode-992000 ...
	I0719 16:36:09.705179    3298 status.go:330] multinode-992000 host status = "Stopped" (err=<nil>)
	I0719 16:36:09.705183    3298 status.go:343] host is not running, skipping remaining checks
	I0719 16:36:09.705185    3298 status.go:257] multinode-992000 status: &{Name:multinode-992000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-992000 status --alsologtostderr": multinode-992000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (28.492333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-992000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-992000 node start m03 --alsologtostderr: exit status 85 (42.603292ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0719 16:36:09.762019    3302 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:36:09.762205    3302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:09.762208    3302 out.go:309] Setting ErrFile to fd 2...
	I0719 16:36:09.762210    3302 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:09.762321    3302 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:36:09.762549    3302 mustload.go:65] Loading cluster: multinode-992000
	I0719 16:36:09.762730    3302 config.go:182] Loaded profile config "multinode-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:36:09.765608    3302 out.go:177] 
	W0719 16:36:09.768615    3302 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0719 16:36:09.768619    3302 out.go:239] * 
	* 
	W0719 16:36:09.770244    3302 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:36:09.773639    3302 out.go:177] 

** /stderr **
multinode_test.go:256: I0719 16:36:09.762019    3302 out.go:296] Setting OutFile to fd 1 ...
I0719 16:36:09.762205    3302 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:36:09.762208    3302 out.go:309] Setting ErrFile to fd 2...
I0719 16:36:09.762210    3302 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:36:09.762321    3302 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
I0719 16:36:09.762549    3302 mustload.go:65] Loading cluster: multinode-992000
I0719 16:36:09.762730    3302 config.go:182] Loaded profile config "multinode-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:36:09.765608    3302 out.go:177] 
W0719 16:36:09.768615    3302 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0719 16:36:09.768619    3302 out.go:239] * 
* 
W0719 16:36:09.770244    3302 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0719 16:36:09.773639    3302 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-992000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-992000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-992000 status: exit status 7 (28.7125ms)

-- stdout --
	multinode-992000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-992000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (28.893625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)
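A note on the doubled boxes and messages in the stderr above: each line appears once with a klog-style "W0719 ... out.go:239]" prefix and once bare. With --alsologtostderr, minikube's console output and its log mirror both land on stderr, so every user-facing message shows up twice. A toy analogue of that effect (not minikube's actual out package):

package main

import (
	"fmt"
	"os"
)

// logAndPrint mimics emitting a user-facing line while also mirroring it to
// the log; with logs redirected to stderr, both copies interleave as above.
func logAndPrint(msg string) {
	fmt.Fprintf(os.Stderr, "W0719 16:36:09.768615    3302 out.go:239] %s\n", msg) // log mirror
	fmt.Fprintln(os.Stderr, msg)                                                  // console copy
}

func main() {
	logAndPrint("X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03")
}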

TestMultiNode/serial/RestartKeepsNodes (5.36s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-992000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-992000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-992000 --wait=true -v=8 --alsologtostderr
E0719 16:36:12.461986    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-992000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.169168792s)

-- stdout --
	* [multinode-992000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-992000 in cluster multinode-992000
	* Restarting existing qemu2 VM for "multinode-992000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-992000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:36:09.949464    3312 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:36:09.949568    3312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:09.949572    3312 out.go:309] Setting ErrFile to fd 2...
	I0719 16:36:09.949574    3312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:09.949690    3312 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:36:09.950627    3312 out.go:303] Setting JSON to false
	I0719 16:36:09.965899    3312 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3940,"bootTime":1689805829,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:36:09.965966    3312 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:36:09.970630    3312 out.go:177] * [multinode-992000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:36:09.976619    3312 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:36:09.980588    3312 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:36:09.976676    3312 notify.go:220] Checking for updates...
	I0719 16:36:09.983593    3312 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:36:09.986567    3312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:36:09.989609    3312 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:36:09.990911    3312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:36:09.993869    3312 config.go:182] Loaded profile config "multinode-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:36:09.993908    3312 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:36:09.998506    3312 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 16:36:10.003585    3312 start.go:298] selected driver: qemu2
	I0719 16:36:10.003594    3312 start.go:880] validating driver "qemu2" against &{Name:multinode-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:36:10.003674    3312 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:36:10.005414    3312 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:36:10.005433    3312 cni.go:84] Creating CNI manager for ""
	I0719 16:36:10.005437    3312 cni.go:136] 1 nodes found, recommending kindnet
	I0719 16:36:10.005443    3312 start_flags.go:319] config:
	{Name:multinode-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:36:10.009263    3312 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:36:10.016537    3312 out.go:177] * Starting control plane node multinode-992000 in cluster multinode-992000
	I0719 16:36:10.020597    3312 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:36:10.020619    3312 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:36:10.020629    3312 cache.go:57] Caching tarball of preloaded images
	I0719 16:36:10.020681    3312 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:36:10.020695    3312 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:36:10.020744    3312 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/multinode-992000/config.json ...
	I0719 16:36:10.021091    3312 start.go:365] acquiring machines lock for multinode-992000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:36:10.021125    3312 start.go:369] acquired machines lock for "multinode-992000" in 28.375µs
	I0719 16:36:10.021134    3312 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:36:10.021139    3312 fix.go:54] fixHost starting: 
	I0719 16:36:10.021256    3312 fix.go:102] recreateIfNeeded on multinode-992000: state=Stopped err=<nil>
	W0719 16:36:10.021264    3312 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:36:10.029541    3312 out.go:177] * Restarting existing qemu2 VM for "multinode-992000" ...
	I0719 16:36:10.033613    3312 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:11:c6:9b:e2:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2
	I0719 16:36:10.035419    3312 main.go:141] libmachine: STDOUT: 
	I0719 16:36:10.035436    3312 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:36:10.035466    3312 fix.go:56] fixHost completed within 14.3285ms
	I0719 16:36:10.035471    3312 start.go:83] releasing machines lock for "multinode-992000", held for 14.342708ms
	W0719 16:36:10.035479    3312 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:36:10.035519    3312 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:36:10.035524    3312 start.go:687] Will try again in 5 seconds ...
	I0719 16:36:15.037640    3312 start.go:365] acquiring machines lock for multinode-992000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:36:15.038029    3312 start.go:369] acquired machines lock for "multinode-992000" in 310.958µs
	I0719 16:36:15.038151    3312 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:36:15.038171    3312 fix.go:54] fixHost starting: 
	I0719 16:36:15.038898    3312 fix.go:102] recreateIfNeeded on multinode-992000: state=Stopped err=<nil>
	W0719 16:36:15.038924    3312 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:36:15.043308    3312 out.go:177] * Restarting existing qemu2 VM for "multinode-992000" ...
	I0719 16:36:15.047486    3312 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:11:c6:9b:e2:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2
	I0719 16:36:15.056940    3312 main.go:141] libmachine: STDOUT: 
	I0719 16:36:15.057007    3312 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:36:15.057092    3312 fix.go:56] fixHost completed within 18.924125ms
	I0719 16:36:15.057175    3312 start.go:83] releasing machines lock for "multinode-992000", held for 19.124416ms
	W0719 16:36:15.057406    3312 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-992000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-992000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:36:15.065311    3312 out.go:177] 
	W0719 16:36:15.069430    3312 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:36:15.069468    3312 out.go:239] * 
	* 
	W0719 16:36:15.071802    3312 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:36:15.079063    3312 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-992000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-992000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (32.712917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.36s)
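Both restart attempts above die at the same step: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach /var/run/socket_vmnet, so the VM never boots and every remaining multinode test inherits a stopped host. A quick reachability probe, assuming the default socket path shown in the log; "connection refused" means the socket file exists but no socket_vmnet daemon is accepting, while a missing file would report "no such file or directory":

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path taken verbatim from the qemu command line in the log above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}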

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-992000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-992000 node delete m03: exit status 89 (38.524916ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-992000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-992000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-992000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-992000 status --alsologtostderr: exit status 7 (28.964583ms)

-- stdout --
	multinode-992000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 16:36:15.259235    3326 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:36:15.259369    3326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:15.259371    3326 out.go:309] Setting ErrFile to fd 2...
	I0719 16:36:15.259374    3326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:15.259494    3326 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:36:15.259602    3326 out.go:303] Setting JSON to false
	I0719 16:36:15.259612    3326 mustload.go:65] Loading cluster: multinode-992000
	I0719 16:36:15.259680    3326 notify.go:220] Checking for updates...
	I0719 16:36:15.259800    3326 config.go:182] Loaded profile config "multinode-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:36:15.259805    3326 status.go:255] checking status of multinode-992000 ...
	I0719 16:36:15.259989    3326 status.go:330] multinode-992000 host status = "Stopped" (err=<nil>)
	I0719 16:36:15.259995    3326 status.go:343] host is not running, skipping remaining checks
	I0719 16:36:15.259997    3326 status.go:257] multinode-992000 status: &{Name:multinode-992000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-992000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (28.394708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (0.14s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-992000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-992000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-992000 status: exit status 7 (29.063ms)

-- stdout --
	multinode-992000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-992000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-992000 status --alsologtostderr: exit status 7 (28.628792ms)

-- stdout --
	multinode-992000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0719 16:36:15.403191    3334 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:36:15.403305    3334 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:15.403308    3334 out.go:309] Setting ErrFile to fd 2...
	I0719 16:36:15.403310    3334 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:15.403420    3334 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:36:15.403524    3334 out.go:303] Setting JSON to false
	I0719 16:36:15.403538    3334 mustload.go:65] Loading cluster: multinode-992000
	I0719 16:36:15.403589    3334 notify.go:220] Checking for updates...
	I0719 16:36:15.403714    3334 config.go:182] Loaded profile config "multinode-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:36:15.403719    3334 status.go:255] checking status of multinode-992000 ...
	I0719 16:36:15.403907    3334 status.go:330] multinode-992000 host status = "Stopped" (err=<nil>)
	I0719 16:36:15.403912    3334 status.go:343] host is not running, skipping remaining checks
	I0719 16:36:15.403914    3334 status.go:257] multinode-992000 status: &{Name:multinode-992000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-992000 status --alsologtostderr": multinode-992000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-992000 status --alsologtostderr": multinode-992000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (28.358958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.14s)
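
The two "incorrect number of stopped hosts/kubelets" failures above are count assertions: after `minikube stop` on what should be a two-node cluster, the test presumably expects two "host: Stopped" and two "kubelet: Stopped" entries in the status output, but only the control-plane node is reported because the second node was never added (every earlier start failed). A hedged sketch of that style of check; the expected count of 2 is an assumption from the test's intent, not copied from the test source:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Status output as captured above: a single stopped control-plane node.
        out := "multinode-992000\ntype: Control Plane\nhost: Stopped\n" +
            "kubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
        const wantNodes = 2 // a two-node cluster should report two stopped hosts
        if got := strings.Count(out, "host: Stopped"); got != wantNodes {
            fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
        }
        if got := strings.Count(out, "kubelet: Stopped"); got != wantNodes {
            fmt.Printf("incorrect number of stopped kubelets: got %d, want %d\n", got, wantNodes)
        }
    }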

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-992000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-992000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.180238542s)

-- stdout --
	* [multinode-992000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-992000 in cluster multinode-992000
	* Restarting existing qemu2 VM for "multinode-992000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-992000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:36:15.459467    3338 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:36:15.459576    3338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:15.459580    3338 out.go:309] Setting ErrFile to fd 2...
	I0719 16:36:15.459582    3338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:15.459689    3338 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:36:15.460655    3338 out.go:303] Setting JSON to false
	I0719 16:36:15.475489    3338 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3946,"bootTime":1689805829,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:36:15.475560    3338 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:36:15.480116    3338 out.go:177] * [multinode-992000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:36:15.486861    3338 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:36:15.486910    3338 notify.go:220] Checking for updates...
	I0719 16:36:15.491015    3338 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:36:15.493996    3338 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:36:15.495467    3338 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:36:15.499000    3338 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:36:15.502036    3338 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:36:15.505286    3338 config.go:182] Loaded profile config "multinode-992000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:36:15.505535    3338 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:36:15.509939    3338 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 16:36:15.517010    3338 start.go:298] selected driver: qemu2
	I0719 16:36:15.517015    3338 start.go:880] validating driver "qemu2" against &{Name:multinode-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:36:15.517083    3338 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:36:15.518864    3338 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:36:15.518883    3338 cni.go:84] Creating CNI manager for ""
	I0719 16:36:15.518888    3338 cni.go:136] 1 nodes found, recommending kindnet
	I0719 16:36:15.518893    3338 start_flags.go:319] config:
	{Name:multinode-992000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-992000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:36:15.522670    3338 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:36:15.529993    3338 out.go:177] * Starting control plane node multinode-992000 in cluster multinode-992000
	I0719 16:36:15.533986    3338 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:36:15.534012    3338 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:36:15.534023    3338 cache.go:57] Caching tarball of preloaded images
	I0719 16:36:15.534084    3338 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:36:15.534089    3338 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:36:15.534147    3338 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/multinode-992000/config.json ...
	I0719 16:36:15.534425    3338 start.go:365] acquiring machines lock for multinode-992000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:36:15.534450    3338 start.go:369] acquired machines lock for "multinode-992000" in 19.875µs
	I0719 16:36:15.534459    3338 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:36:15.534464    3338 fix.go:54] fixHost starting: 
	I0719 16:36:15.534603    3338 fix.go:102] recreateIfNeeded on multinode-992000: state=Stopped err=<nil>
	W0719 16:36:15.534611    3338 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:36:15.542083    3338 out.go:177] * Restarting existing qemu2 VM for "multinode-992000" ...
	I0719 16:36:15.550032    3338 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:11:c6:9b:e2:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2
	I0719 16:36:15.552088    3338 main.go:141] libmachine: STDOUT: 
	I0719 16:36:15.552105    3338 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:36:15.552136    3338 fix.go:56] fixHost completed within 17.671459ms
	I0719 16:36:15.552141    3338 start.go:83] releasing machines lock for "multinode-992000", held for 17.687375ms
	W0719 16:36:15.552152    3338 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:36:15.552207    3338 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:36:15.552211    3338 start.go:687] Will try again in 5 seconds ...
	I0719 16:36:20.554249    3338 start.go:365] acquiring machines lock for multinode-992000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:36:20.554966    3338 start.go:369] acquired machines lock for "multinode-992000" in 634.084µs
	I0719 16:36:20.555114    3338 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:36:20.555136    3338 fix.go:54] fixHost starting: 
	I0719 16:36:20.555881    3338 fix.go:102] recreateIfNeeded on multinode-992000: state=Stopped err=<nil>
	W0719 16:36:20.555909    3338 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:36:20.564318    3338 out.go:177] * Restarting existing qemu2 VM for "multinode-992000" ...
	I0719 16:36:20.568401    3338 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:11:c6:9b:e2:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/multinode-992000/disk.qcow2
	I0719 16:36:20.577874    3338 main.go:141] libmachine: STDOUT: 
	I0719 16:36:20.577924    3338 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:36:20.577994    3338 fix.go:56] fixHost completed within 22.860542ms
	I0719 16:36:20.578011    3338 start.go:83] releasing machines lock for "multinode-992000", held for 23.02125ms
	W0719 16:36:20.578291    3338 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-992000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-992000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:36:20.586281    3338 out.go:177] 
	W0719 16:36:20.590364    3338 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:36:20.590393    3338 out.go:239] * 
	* 
	W0719 16:36:20.592861    3338 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:36:20.601249    3338 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-992000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (68.164083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
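
Every start and restart in this group dies on the same line: the qemu2 driver launches the VM through socket_vmnet_client, and the client cannot reach /var/run/socket_vmnet, so the socket_vmnet daemon is evidently not running (or not listening at that path) on this agent. The condition is easy to reproduce outside minikube; a small sketch, with the socket path taken from SocketVMnetPath in the profile config above:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the logged config
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // With the daemon down this reports the same "connection refused"
            // that socket_vmnet_client logs above.
            fmt.Printf("cannot connect to %s: %v\n", sock, err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections; the qemu2 driver should start")
    }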

TestMultiNode/serial/ValidateNameConflict (20.13s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-992000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-992000-m01 --driver=qemu2 
E0719 16:36:22.704398    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-992000-m01 --driver=qemu2 : exit status 80 (9.920795083s)

-- stdout --
	* [multinode-992000-m01] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-992000-m01 in cluster multinode-992000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-992000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-992000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-992000-m02 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-992000-m02 --driver=qemu2 : exit status 80 (9.92921975s)

-- stdout --
	* [multinode-992000-m02] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-992000-m02 in cluster multinode-992000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-992000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-992000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-992000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-992000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-992000: exit status 89 (78.779459ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-992000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-992000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-992000 -n multinode-992000: exit status 7 (31.88725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-992000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.13s)
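
For reference, the launch path all of these tests exercise is visible in the qemu command lines logged earlier: libmachine execs socket_vmnet_client, which connects to the unix socket and hands the connected descriptor to the child qemu process, which is why the qemu arguments include -netdev socket,id=net0,fd=3. A stripped-down sketch of that invocation, with arguments copied from the log and most qemu flags elided; this mirrors the observed command line, not the driver's actual code, and it needs the socket_vmnet daemon running plus suitable permissions:

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command(
            "/opt/socket_vmnet/bin/socket_vmnet_client",
            "/var/run/socket_vmnet", // the socket whose fd is passed to qemu as fd 3
            "qemu-system-aarch64",
            "-M", "virt,highmem=off",
            "-cpu", "host",
            "-netdev", "socket,id=net0,fd=3",
            "-device", "virtio-net-pci,netdev=net0",
            // ... -drive/-cdrom/-qmp/-pidfile flags elided; see the log line above.
        )
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            // With socket_vmnet down this fails exactly like the log:
            //   Failed to connect to "/var/run/socket_vmnet": Connection refused
            log.Fatal(err)
        }
    }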

TestPreload (10.06s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-407000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0719 16:36:43.186661    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-407000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.895075584s)

-- stdout --
	* [test-preload-407000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-407000 in cluster test-preload-407000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-407000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:36:40.964659    3392 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:36:40.964785    3392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:40.964788    3392 out.go:309] Setting ErrFile to fd 2...
	I0719 16:36:40.964792    3392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:36:40.964894    3392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:36:40.965923    3392 out.go:303] Setting JSON to false
	I0719 16:36:40.981037    3392 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3971,"bootTime":1689805829,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:36:40.981118    3392 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:36:40.985433    3392 out.go:177] * [test-preload-407000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:36:40.993448    3392 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:36:40.997408    3392 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:36:40.993534    3392 notify.go:220] Checking for updates...
	I0719 16:36:41.000354    3392 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:36:41.003429    3392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:36:41.006409    3392 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:36:41.009413    3392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:36:41.012736    3392 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:36:41.012784    3392 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:36:41.017335    3392 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:36:41.024425    3392 start.go:298] selected driver: qemu2
	I0719 16:36:41.024430    3392 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:36:41.024437    3392 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:36:41.026248    3392 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:36:41.029369    3392 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:36:41.032498    3392 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:36:41.032521    3392 cni.go:84] Creating CNI manager for ""
	I0719 16:36:41.032530    3392 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:36:41.032541    3392 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:36:41.032548    3392 start_flags.go:319] config:
	{Name:test-preload-407000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-407000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:36:41.036719    3392 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:36:41.043400    3392 out.go:177] * Starting control plane node test-preload-407000 in cluster test-preload-407000
	I0719 16:36:41.047415    3392 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0719 16:36:41.047496    3392 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/test-preload-407000/config.json ...
	I0719 16:36:41.047519    3392 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/test-preload-407000/config.json: {Name:mk03bf4f23e8f5b2804812a91a243ea70320bacd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:36:41.047523    3392 cache.go:107] acquiring lock: {Name:mkb6e4b49426b30d645e71f11c11c3ac599de6e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:36:41.047521    3392 cache.go:107] acquiring lock: {Name:mk7803e5d16883f92db6d35161b7ee419dcd1d5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:36:41.047546    3392 cache.go:107] acquiring lock: {Name:mk1aa22aeb6edea257a477ea08b7a53677d0fed9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:36:41.047566    3392 cache.go:107] acquiring lock: {Name:mk4d0557a268c8978dedf90fc530da605b6dc075 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:36:41.047580    3392 cache.go:107] acquiring lock: {Name:mkc3ae760bf98f6f94ade385732ba74455ba43ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:36:41.047714    3392 cache.go:107] acquiring lock: {Name:mkff43e59907ea5585da2346f9d543e4d4229b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:36:41.047765    3392 cache.go:107] acquiring lock: {Name:mk5b245c9f44bac6b2ddeed554967152cf1f84b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:36:41.048059    3392 cache.go:107] acquiring lock: {Name:mk2ce01a1e8549830524b78d7ce315d4d6a17c1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:36:41.048088    3392 start.go:365] acquiring machines lock for test-preload-407000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:36:41.048194    3392 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0719 16:36:41.048227    3392 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 16:36:41.048237    3392 start.go:369] acquired machines lock for "test-preload-407000" in 97.583µs
	I0719 16:36:41.048262    3392 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 16:36:41.048263    3392 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 16:36:41.048328    3392 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 16:36:41.048266    3392 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0719 16:36:41.048269    3392 start.go:93] Provisioning new machine with config: &{Name:test-preload-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-407000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:36:41.048454    3392 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:36:41.057417    3392 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:36:41.048496    3392 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 16:36:41.048595    3392 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 16:36:41.061386    3392 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0719 16:36:41.065271    3392 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0719 16:36:41.065381    3392 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0719 16:36:41.065535    3392 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0719 16:36:41.067510    3392 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0719 16:36:41.067589    3392 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0719 16:36:41.067783    3392 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0719 16:36:41.068036    3392 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0719 16:36:41.073877    3392 start.go:159] libmachine.API.Create for "test-preload-407000" (driver="qemu2")
	I0719 16:36:41.073899    3392 client.go:168] LocalClient.Create starting
	I0719 16:36:41.073977    3392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:36:41.074004    3392 main.go:141] libmachine: Decoding PEM data...
	I0719 16:36:41.074019    3392 main.go:141] libmachine: Parsing certificate...
	I0719 16:36:41.074065    3392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:36:41.074080    3392 main.go:141] libmachine: Decoding PEM data...
	I0719 16:36:41.074089    3392 main.go:141] libmachine: Parsing certificate...
	I0719 16:36:41.074398    3392 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:36:41.190294    3392 main.go:141] libmachine: Creating SSH key...
	I0719 16:36:41.464233    3392 main.go:141] libmachine: Creating Disk image...
	I0719 16:36:41.464253    3392 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:36:41.464429    3392 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2
	I0719 16:36:41.473072    3392 main.go:141] libmachine: STDOUT: 
	I0719 16:36:41.473091    3392 main.go:141] libmachine: STDERR: 
	I0719 16:36:41.473156    3392 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2 +20000M
	I0719 16:36:41.480865    3392 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:36:41.480886    3392 main.go:141] libmachine: STDERR: 
	I0719 16:36:41.480908    3392 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2
	I0719 16:36:41.480919    3392 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:36:41.480965    3392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:24:9b:5c:49:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2
	I0719 16:36:41.482498    3392 main.go:141] libmachine: STDOUT: 
	I0719 16:36:41.482511    3392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:36:41.482530    3392 client.go:171] LocalClient.Create took 408.633792ms
	I0719 16:36:42.351894    3392 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0719 16:36:42.404078    3392 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0719 16:36:42.411020    3392 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0719 16:36:42.492060    3392 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0719 16:36:42.492101    3392 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0719 16:36:42.518857    3392 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0719 16:36:42.518868    3392 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.47133375s
	I0719 16:36:42.518877    3392 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0719 16:36:42.717121    3392 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0719 16:36:42.939763    3392 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	W0719 16:36:43.078417    3392 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0719 16:36:43.078541    3392 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0719 16:36:43.165206    3392 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0719 16:36:43.482816    3392 start.go:128] duration metric: createHost completed in 2.434371667s
	I0719 16:36:43.482897    3392 start.go:83] releasing machines lock for "test-preload-407000", held for 2.434687333s
	W0719 16:36:43.482953    3392 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:36:43.492049    3392 out.go:177] * Deleting "test-preload-407000" in qemu2 ...
	W0719 16:36:43.511870    3392 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:36:43.511901    3392 start.go:687] Will try again in 5 seconds ...
	I0719 16:36:43.848774    3392 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0719 16:36:43.848808    3392 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.8013345s
	I0719 16:36:43.848834    3392 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0719 16:36:44.879855    3392 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0719 16:36:44.879902    3392 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.832384s
	I0719 16:36:44.879933    3392 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0719 16:36:44.934508    3392 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0719 16:36:44.934542    3392 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.8865755s
	I0719 16:36:44.934565    3392 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0719 16:36:46.337768    3392 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0719 16:36:46.337815    3392 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.290378875s
	I0719 16:36:46.337847    3392 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0719 16:36:46.687960    3392 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0719 16:36:46.688008    3392 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.640598333s
	I0719 16:36:46.688035    3392 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0719 16:36:47.972922    3392 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0719 16:36:47.972969    3392 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.925366542s
	I0719 16:36:47.972998    3392 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0719 16:36:48.511983    3392 start.go:365] acquiring machines lock for test-preload-407000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:36:48.512475    3392 start.go:369] acquired machines lock for "test-preload-407000" in 412.5µs
	I0719 16:36:48.512585    3392 start.go:93] Provisioning new machine with config: &{Name:test-preload-407000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-407000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:36:48.512828    3392 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:36:48.521373    3392 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:36:48.567818    3392 start.go:159] libmachine.API.Create for "test-preload-407000" (driver="qemu2")
	I0719 16:36:48.567858    3392 client.go:168] LocalClient.Create starting
	I0719 16:36:48.567974    3392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:36:48.568034    3392 main.go:141] libmachine: Decoding PEM data...
	I0719 16:36:48.568068    3392 main.go:141] libmachine: Parsing certificate...
	I0719 16:36:48.568131    3392 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:36:48.568160    3392 main.go:141] libmachine: Decoding PEM data...
	I0719 16:36:48.568171    3392 main.go:141] libmachine: Parsing certificate...
	I0719 16:36:48.568681    3392 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:36:48.694635    3392 main.go:141] libmachine: Creating SSH key...
	I0719 16:36:48.778455    3392 main.go:141] libmachine: Creating Disk image...
	I0719 16:36:48.778463    3392 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:36:48.778591    3392 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2
	I0719 16:36:48.787050    3392 main.go:141] libmachine: STDOUT: 
	I0719 16:36:48.787063    3392 main.go:141] libmachine: STDERR: 
	I0719 16:36:48.787115    3392 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2 +20000M
	I0719 16:36:48.794425    3392 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:36:48.794436    3392 main.go:141] libmachine: STDERR: 
	I0719 16:36:48.794449    3392 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2
	I0719 16:36:48.794457    3392 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:36:48.794499    3392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:76:58:69:37:86 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/test-preload-407000/disk.qcow2
	I0719 16:36:48.796053    3392 main.go:141] libmachine: STDOUT: 
	I0719 16:36:48.796065    3392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:36:48.796077    3392 client.go:171] LocalClient.Create took 228.216833ms
	I0719 16:36:50.796932    3392 start.go:128] duration metric: createHost completed in 2.284092959s
	I0719 16:36:50.796976    3392 start.go:83] releasing machines lock for "test-preload-407000", held for 2.284519625s
	W0719 16:36:50.797206    3392 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-407000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:36:50.805636    3392 out.go:177] 
	W0719 16:36:50.809807    3392 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:36:50.809838    3392 out.go:239] * 
	* 
	W0719 16:36:50.812488    3392 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:36:50.820714    3392 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-407000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-07-19 16:36:50.834092 -0700 PDT m=+2738.883097543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-407000 -n test-preload-407000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-407000 -n test-preload-407000: exit status 7 (66.942583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-407000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-407000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-407000
--- FAIL: TestPreload (10.06s)
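
The qemu2 failures in this report share one root cause: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the daemon socket at /var/run/socket_vmnet, so every VM create dies with "Connection refused". A minimal Go sketch of that reachability check (a hypothetical helper, not part of the test suite) reproduces the symptom directly:

// socketcheck.go: hypothetical probe, not part of the minikube test suite.
// If the socket_vmnet daemon is not listening on /var/run/socket_vmnet,
// the dial fails the same way socket_vmnet_client does above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path taken from the failing logs
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1) // mirrors the "Connection refused: exit status 1" above
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

A refused dial here would mean the socket_vmnet daemon is not running on the CI host (or listens on a different path), which would account for every GUEST_PROVISION failure in this run.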

TestScheduledStopUnix (9.89s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-861000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-861000 --memory=2048 --driver=qemu2 : exit status 80 (9.719381833s)

-- stdout --
	* [scheduled-stop-861000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-861000 in cluster scheduled-stop-861000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-861000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-861000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-861000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-861000 in cluster scheduled-stop-861000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-861000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-861000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-07-19 16:37:00.715733 -0700 PDT m=+2748.764914335
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-861000 -n scheduled-stop-861000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-861000 -n scheduled-stop-861000: exit status 7 (68.183667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-861000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-861000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-861000
--- FAIL: TestScheduledStopUnix (9.89s)

TestSkaffold (13.13s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe232131574 version
skaffold_test.go:63: skaffold version: v2.6.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-801000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-801000 --memory=2600 --driver=qemu2 : exit status 80 (9.712472542s)

-- stdout --
	* [skaffold-801000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-801000 in cluster skaffold-801000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-801000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-801000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-801000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-801000 in cluster skaffold-801000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-801000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-801000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-07-19 16:37:13.84794 -0700 PDT m=+2761.897355501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-801000 -n skaffold-801000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-801000 -n skaffold-801000: exit status 7 (63.188584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-801000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-801000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-801000
--- FAIL: TestSkaffold (13.13s)

TestRunningBinaryUpgrade (179s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade


=== CONT  TestRunningBinaryUpgrade
E0719 16:37:55.614137    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:38:23.318749    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:38:30.191970    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
E0719 16:38:46.067915    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-07-19 16:40:52.590149 -0700 PDT m=+2980.624820210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-324000 -n running-upgrade-324000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-324000 -n running-upgrade-324000: exit status 85 (82.003875ms)

-- stdout --
	* Profile "running-upgrade-324000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-324000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-324000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-324000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-324000\"")
helpers_test.go:175: Cleaning up "running-upgrade-324000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-324000
--- FAIL: TestRunningBinaryUpgrade (179.00s)
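
Unlike the socket_vmnet failures, TestRunningBinaryUpgrade never reaches VM creation: downloading the old v1.6.2 release binary returns HTTP 404. That is plausible on this darwin/arm64 host, since v1.6.2 predates minikube's arm64 macOS builds, so no matching release asset should exist. A quick probe of that hypothesis (the URL below follows the standard GitHub release layout and is an assumption; the test may fetch old releases from a different mirror):

// releasecheck.go: hypothetical probe, not part of the test suite.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Assumed asset URL; adjust if the test uses another release mirror.
	url := "https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-arm64"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // a 404 here matches "bad response code: 404" above
}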

TestKubernetesUpgrade (15.29s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-302000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-302000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.786655375s)

-- stdout --
	* [kubernetes-upgrade-302000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-302000 in cluster kubernetes-upgrade-302000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-302000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:40:52.991787    3888 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:40:52.991886    3888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:40:52.991890    3888 out.go:309] Setting ErrFile to fd 2...
	I0719 16:40:52.991893    3888 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:40:52.991999    3888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:40:52.993057    3888 out.go:303] Setting JSON to false
	I0719 16:40:53.008082    3888 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4223,"bootTime":1689805829,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:40:53.008156    3888 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:40:53.012482    3888 out.go:177] * [kubernetes-upgrade-302000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:40:53.019439    3888 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:40:53.023553    3888 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:40:53.019477    3888 notify.go:220] Checking for updates...
	I0719 16:40:53.029483    3888 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:40:53.032430    3888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:40:53.035526    3888 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:40:53.038470    3888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:40:53.041702    3888 config.go:182] Loaded profile config "cert-expiration-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:40:53.041766    3888 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:40:53.041812    3888 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:40:53.046445    3888 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:40:53.053418    3888 start.go:298] selected driver: qemu2
	I0719 16:40:53.053423    3888 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:40:53.053430    3888 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:40:53.055334    3888 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:40:53.058449    3888 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:40:53.061562    3888 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 16:40:53.061591    3888 cni.go:84] Creating CNI manager for ""
	I0719 16:40:53.061597    3888 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 16:40:53.061601    3888 start_flags.go:319] config:
	{Name:kubernetes-upgrade-302000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-302000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:40:53.065755    3888 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:40:53.072481    3888 out.go:177] * Starting control plane node kubernetes-upgrade-302000 in cluster kubernetes-upgrade-302000
	I0719 16:40:53.076470    3888 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0719 16:40:53.076498    3888 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0719 16:40:53.076511    3888 cache.go:57] Caching tarball of preloaded images
	I0719 16:40:53.076576    3888 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:40:53.076582    3888 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0719 16:40:53.076640    3888 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/kubernetes-upgrade-302000/config.json ...
	I0719 16:40:53.076660    3888 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/kubernetes-upgrade-302000/config.json: {Name:mkcdc112d67c948418983cb1c66aa8db33348845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:40:53.076867    3888 start.go:365] acquiring machines lock for kubernetes-upgrade-302000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:40:53.076897    3888 start.go:369] acquired machines lock for "kubernetes-upgrade-302000" in 21.541µs
	I0719 16:40:53.076909    3888 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-302000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-302000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:40:53.076940    3888 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:40:53.085465    3888 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:40:53.102034    3888 start.go:159] libmachine.API.Create for "kubernetes-upgrade-302000" (driver="qemu2")
	I0719 16:40:53.102056    3888 client.go:168] LocalClient.Create starting
	I0719 16:40:53.102127    3888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:40:53.102154    3888 main.go:141] libmachine: Decoding PEM data...
	I0719 16:40:53.102170    3888 main.go:141] libmachine: Parsing certificate...
	I0719 16:40:53.102203    3888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:40:53.102220    3888 main.go:141] libmachine: Decoding PEM data...
	I0719 16:40:53.102228    3888 main.go:141] libmachine: Parsing certificate...
	I0719 16:40:53.102561    3888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:40:53.217351    3888 main.go:141] libmachine: Creating SSH key...
	I0719 16:40:53.306524    3888 main.go:141] libmachine: Creating Disk image...
	I0719 16:40:53.306531    3888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:40:53.306691    3888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2
	I0719 16:40:53.315177    3888 main.go:141] libmachine: STDOUT: 
	I0719 16:40:53.315196    3888 main.go:141] libmachine: STDERR: 
	I0719 16:40:53.315249    3888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2 +20000M
	I0719 16:40:53.322318    3888 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:40:53.322330    3888 main.go:141] libmachine: STDERR: 
	I0719 16:40:53.322348    3888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2
	I0719 16:40:53.322356    3888 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:40:53.322393    3888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:4b:21:35:b3:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2
	I0719 16:40:53.323886    3888 main.go:141] libmachine: STDOUT: 
	I0719 16:40:53.323908    3888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:40:53.323923    3888 client.go:171] LocalClient.Create took 221.748292ms
	I0719 16:40:55.327097    3888 start.go:128] duration metric: createHost completed in 2.249030625s
	I0719 16:40:55.327158    3888 start.go:83] releasing machines lock for "kubernetes-upgrade-302000", held for 2.249141916s
	W0719 16:40:55.327240    3888 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:40:55.340688    3888 out.go:177] * Deleting "kubernetes-upgrade-302000" in qemu2 ...
	W0719 16:40:55.367544    3888 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:40:55.367584    3888 start.go:687] Will try again in 5 seconds ...
	I0719 16:41:00.371698    3888 start.go:365] acquiring machines lock for kubernetes-upgrade-302000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:41:00.372202    3888 start.go:369] acquired machines lock for "kubernetes-upgrade-302000" in 383.5µs
	I0719 16:41:00.372328    3888 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-302000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-302000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:41:00.372684    3888 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:41:00.378419    3888 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:41:00.429086    3888 start.go:159] libmachine.API.Create for "kubernetes-upgrade-302000" (driver="qemu2")
	I0719 16:41:00.429130    3888 client.go:168] LocalClient.Create starting
	I0719 16:41:00.429263    3888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:41:00.429332    3888 main.go:141] libmachine: Decoding PEM data...
	I0719 16:41:00.429354    3888 main.go:141] libmachine: Parsing certificate...
	I0719 16:41:00.429449    3888 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:41:00.429497    3888 main.go:141] libmachine: Decoding PEM data...
	I0719 16:41:00.429518    3888 main.go:141] libmachine: Parsing certificate...
	I0719 16:41:00.430093    3888 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:41:00.557276    3888 main.go:141] libmachine: Creating SSH key...
	I0719 16:41:00.697006    3888 main.go:141] libmachine: Creating Disk image...
	I0719 16:41:00.697012    3888 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:41:00.697184    3888 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2
	I0719 16:41:00.706094    3888 main.go:141] libmachine: STDOUT: 
	I0719 16:41:00.706109    3888 main.go:141] libmachine: STDERR: 
	I0719 16:41:00.706168    3888 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2 +20000M
	I0719 16:41:00.713381    3888 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:41:00.713399    3888 main.go:141] libmachine: STDERR: 
	I0719 16:41:00.713418    3888 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2
	I0719 16:41:00.713433    3888 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:41:00.713468    3888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:53:96:af:54:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2
	I0719 16:41:00.715012    3888 main.go:141] libmachine: STDOUT: 
	I0719 16:41:00.715025    3888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:41:00.715039    3888 client.go:171] LocalClient.Create took 285.810625ms
	I0719 16:41:02.717835    3888 start.go:128] duration metric: createHost completed in 2.344412s
	I0719 16:41:02.717899    3888 start.go:83] releasing machines lock for "kubernetes-upgrade-302000", held for 2.344950833s
	W0719 16:41:02.718352    3888 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-302000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-302000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:41:02.727834    3888 out.go:177] 
	W0719 16:41:02.731857    3888 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:41:02.731894    3888 out.go:239] * 
	* 
	W0719 16:41:02.734510    3888 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:41:02.742799    3888 out.go:177] 

** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-302000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-302000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-302000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-302000 status --format={{.Host}}: exit status 7 (35.223208ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-302000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-302000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.17573s)

-- stdout --
	* [kubernetes-upgrade-302000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-302000 in cluster kubernetes-upgrade-302000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-302000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-302000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:41:02.918005    3913 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:41:02.918149    3913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:41:02.918152    3913 out.go:309] Setting ErrFile to fd 2...
	I0719 16:41:02.918155    3913 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:41:02.918257    3913 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:41:02.919255    3913 out.go:303] Setting JSON to false
	I0719 16:41:02.934277    3913 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4233,"bootTime":1689805829,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:41:02.934343    3913 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:41:02.938432    3913 out.go:177] * [kubernetes-upgrade-302000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:41:02.945335    3913 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:41:02.949352    3913 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:41:02.945408    3913 notify.go:220] Checking for updates...
	I0719 16:41:02.952378    3913 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:41:02.955339    3913 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:41:02.958331    3913 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:41:02.961338    3913 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:41:02.964597    3913 config.go:182] Loaded profile config "kubernetes-upgrade-302000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0719 16:41:02.964841    3913 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:41:02.969254    3913 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 16:41:02.976273    3913 start.go:298] selected driver: qemu2
	I0719 16:41:02.976277    3913 start.go:880] validating driver "qemu2" against &{Name:kubernetes-upgrade-302000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-302000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:41:02.976346    3913 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:41:02.978223    3913 cni.go:84] Creating CNI manager for ""
	I0719 16:41:02.978238    3913 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:41:02.978243    3913 start_flags.go:319] config:
	{Name:kubernetes-upgrade-302000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-302000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:41:02.982233    3913 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:41:02.989287    3913 out.go:177] * Starting control plane node kubernetes-upgrade-302000 in cluster kubernetes-upgrade-302000
	I0719 16:41:02.993313    3913 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:41:02.993335    3913 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:41:02.993350    3913 cache.go:57] Caching tarball of preloaded images
	I0719 16:41:02.993430    3913 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:41:02.993435    3913 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:41:02.993485    3913 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/kubernetes-upgrade-302000/config.json ...
	I0719 16:41:02.993866    3913 start.go:365] acquiring machines lock for kubernetes-upgrade-302000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:41:02.993896    3913 start.go:369] acquired machines lock for "kubernetes-upgrade-302000" in 24.042µs
	I0719 16:41:02.993906    3913 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:41:02.993911    3913 fix.go:54] fixHost starting: 
	I0719 16:41:02.994026    3913 fix.go:102] recreateIfNeeded on kubernetes-upgrade-302000: state=Stopped err=<nil>
	W0719 16:41:02.994035    3913 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:41:02.998323    3913 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-302000" ...
	I0719 16:41:03.006383    3913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:53:96:af:54:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2
	I0719 16:41:03.008240    3913 main.go:141] libmachine: STDOUT: 
	I0719 16:41:03.008255    3913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:41:03.008285    3913 fix.go:56] fixHost completed within 14.370833ms
	I0719 16:41:03.008290    3913 start.go:83] releasing machines lock for "kubernetes-upgrade-302000", held for 14.386334ms
	W0719 16:41:03.008297    3913 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:41:03.008333    3913 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:41:03.008338    3913 start.go:687] Will try again in 5 seconds ...
	I0719 16:41:08.011647    3913 start.go:365] acquiring machines lock for kubernetes-upgrade-302000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:41:08.011967    3913 start.go:369] acquired machines lock for "kubernetes-upgrade-302000" in 247.083µs
	I0719 16:41:08.012108    3913 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:41:08.012129    3913 fix.go:54] fixHost starting: 
	I0719 16:41:08.012845    3913 fix.go:102] recreateIfNeeded on kubernetes-upgrade-302000: state=Stopped err=<nil>
	W0719 16:41:08.012873    3913 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:41:08.021234    3913 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-302000" ...
	I0719 16:41:08.024281    3913 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:53:96:af:54:16 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubernetes-upgrade-302000/disk.qcow2
	I0719 16:41:08.033434    3913 main.go:141] libmachine: STDOUT: 
	I0719 16:41:08.033498    3913 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:41:08.033569    3913 fix.go:56] fixHost completed within 21.435042ms
	I0719 16:41:08.033589    3913 start.go:83] releasing machines lock for "kubernetes-upgrade-302000", held for 21.596959ms
	W0719 16:41:08.033838    3913 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-302000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-302000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:41:08.041281    3913 out.go:177] 
	W0719 16:41:08.045291    3913 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:41:08.045318    3913 out.go:239] * 
	* 
	W0719 16:41:08.048014    3913 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:41:08.055263    3913 out.go:177] 

** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-302000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-302000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-302000 version --output=json: exit status 1 (64.103042ms)

** stderr ** 
	error: context "kubernetes-upgrade-302000" does not exist

** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-07-19 16:41:08.133441 -0700 PDT m=+2996.162790460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-302000 -n kubernetes-upgrade-302000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-302000 -n kubernetes-upgrade-302000: exit status 7 (33.409542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-302000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-302000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-302000
--- FAIL: TestKubernetesUpgrade (15.29s)
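
Root-cause note: every qemu2 start in this run dies at the same point; socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU never gets a network device and minikube aborts with GUEST_PROVISION. A minimal triage sketch for the build host, assuming socket_vmnet was installed through Homebrew as the minikube qemu2 driver docs describe (these commands are illustrative, not part of the test suite):

	# Does the socket exist, and is anything serving it?
	ls -l /var/run/socket_vmnet
	# Restart the daemon if the socket is missing or stale (Homebrew install assumed)
	sudo brew services restart socket_vmnet

With the daemon healthy, rerunning the failed start command should get past "Connection refused".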

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.4s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.31.0 on darwin (arm64)
- MINIKUBE_LOCATION=15585
- KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2458256204/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.40s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.13s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.31.0 on darwin (arm64)
- MINIKUBE_LOCATION=15585
- KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current697046653/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.13s)
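
Both TestHyperkitDriverSkipUpgrade subtests fail for the same environmental reason: the hyperkit driver exists only for darwin/amd64, and this agent is darwin/arm64, so minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. A hedged sketch of the guard a job like this could run up front (a hypothetical check, not something the suite does today):

	# Skip hyperkit-specific tests on Apple silicon; uname -m prints arm64 there.
	if [ "$(uname -s)" = "Darwin" ] && [ "$(uname -m)" = "arm64" ]; then
		echo "hyperkit is darwin/amd64-only; skipping TestHyperkitDriverSkipUpgrade"
	fi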

TestStoppedBinaryUpgrade/Setup (171.71s)

=== RUN   TestStoppedBinaryUpgrade/Setup
E0719 16:41:02.221339    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (171.71s)
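
The Setup step tries to install the old minikube v1.6.2 release to upgrade from, and the "bad response code: 404" indicates the download URL resolved to nothing; v1.6.2 predates minikube's darwin/arm64 release binaries, so there is no matching asset for this agent to fetch. A one-line check (the exact asset name is an assumption based on minikube's current release naming):

	# Expect HTTP 404 if the v1.6.2 release has no darwin-arm64 asset
	curl -sI https://github.com/kubernetes/minikube/releases/download/v1.6.2/minikube-darwin-arm64 | head -n 1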

TestPause/serial/Start (9.77s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-678000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-678000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.699559084s)

-- stdout --
	* [pause-678000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-678000 in cluster pause-678000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-678000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-678000 -n pause-678000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-678000 -n pause-678000: exit status 7 (69.801084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-678000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.77s)

TestNoKubernetes/serial/StartWithK8s (9.85s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-815000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-815000 --driver=qemu2 : exit status 80 (9.779776916s)

-- stdout --
	* [NoKubernetes-815000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-815000 in cluster NoKubernetes-815000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-815000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-815000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-815000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-815000 -n NoKubernetes-815000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-815000 -n NoKubernetes-815000: exit status 7 (70.316333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-815000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.85s)

TestNoKubernetes/serial/StartWithStopK8s (5.47s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-815000 --no-kubernetes --driver=qemu2 
E0719 16:41:29.934513    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/ingress-addon-legacy-442000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-815000 --no-kubernetes --driver=qemu2 : exit status 80 (5.401381541s)

-- stdout --
	* [NoKubernetes-815000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-815000
	* Restarting existing qemu2 VM for "NoKubernetes-815000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-815000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-815000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-815000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-815000 -n NoKubernetes-815000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-815000 -n NoKubernetes-815000: exit status 7 (69.717375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-815000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.47s)

TestNoKubernetes/serial/Start (5.47s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-815000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-815000 --no-kubernetes --driver=qemu2 : exit status 80 (5.401126084s)

-- stdout --
	* [NoKubernetes-815000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-815000
	* Restarting existing qemu2 VM for "NoKubernetes-815000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-815000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-815000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-815000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-815000 -n NoKubernetes-815000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-815000 -n NoKubernetes-815000: exit status 7 (67.056917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-815000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.47s)

TestNoKubernetes/serial/StartNoArgs (5.46s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-815000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-815000 --driver=qemu2 : exit status 80 (5.396436375s)

-- stdout --
	* [NoKubernetes-815000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-815000
	* Restarting existing qemu2 VM for "NoKubernetes-815000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-815000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-815000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-815000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-815000 -n NoKubernetes-815000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-815000 -n NoKubernetes-815000: exit status 7 (63.892708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-815000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.46s)

TestNetworkPlugins/group/auto/Start (9.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.774216708s)

-- stdout --
	* [auto-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-318000 in cluster auto-318000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:41:44.897557    4019 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:41:44.897696    4019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:41:44.897699    4019 out.go:309] Setting ErrFile to fd 2...
	I0719 16:41:44.897701    4019 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:41:44.897808    4019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:41:44.898839    4019 out.go:303] Setting JSON to false
	I0719 16:41:44.914177    4019 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4275,"bootTime":1689805829,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:41:44.914240    4019 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:41:44.918283    4019 out.go:177] * [auto-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:41:44.925197    4019 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:41:44.925270    4019 notify.go:220] Checking for updates...
	I0719 16:41:44.932204    4019 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:41:44.935128    4019 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:41:44.938192    4019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:41:44.941191    4019 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:41:44.942563    4019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:41:44.945477    4019 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:41:44.945517    4019 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:41:44.949181    4019 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:41:44.954183    4019 start.go:298] selected driver: qemu2
	I0719 16:41:44.954187    4019 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:41:44.954193    4019 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:41:44.955956    4019 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:41:44.959205    4019 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:41:44.962289    4019 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:41:44.962310    4019 cni.go:84] Creating CNI manager for ""
	I0719 16:41:44.962318    4019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:41:44.962322    4019 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:41:44.962329    4019 start_flags.go:319] config:
	{Name:auto-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:41:44.966383    4019 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:41:44.973213    4019 out.go:177] * Starting control plane node auto-318000 in cluster auto-318000
	I0719 16:41:44.977128    4019 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:41:44.977152    4019 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:41:44.977163    4019 cache.go:57] Caching tarball of preloaded images
	I0719 16:41:44.977227    4019 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:41:44.977233    4019 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:41:44.977294    4019 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/auto-318000/config.json ...
	I0719 16:41:44.977307    4019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/auto-318000/config.json: {Name:mkebe1ce2260002ac3bbdbf77670bba0fc56ed88 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:41:44.977503    4019 start.go:365] acquiring machines lock for auto-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:41:44.977532    4019 start.go:369] acquired machines lock for "auto-318000" in 23.792µs
	I0719 16:41:44.977543    4019 start.go:93] Provisioning new machine with config: &{Name:auto-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:41:44.977569    4019 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:41:44.986205    4019 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:41:45.002237    4019 start.go:159] libmachine.API.Create for "auto-318000" (driver="qemu2")
	I0719 16:41:45.002267    4019 client.go:168] LocalClient.Create starting
	I0719 16:41:45.002333    4019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:41:45.002360    4019 main.go:141] libmachine: Decoding PEM data...
	I0719 16:41:45.002379    4019 main.go:141] libmachine: Parsing certificate...
	I0719 16:41:45.002431    4019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:41:45.002447    4019 main.go:141] libmachine: Decoding PEM data...
	I0719 16:41:45.002453    4019 main.go:141] libmachine: Parsing certificate...
	I0719 16:41:45.002999    4019 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:41:45.118304    4019 main.go:141] libmachine: Creating SSH key...
	I0719 16:41:45.318537    4019 main.go:141] libmachine: Creating Disk image...
	I0719 16:41:45.318544    4019 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:41:45.318708    4019 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2
	I0719 16:41:45.327786    4019 main.go:141] libmachine: STDOUT: 
	I0719 16:41:45.327800    4019 main.go:141] libmachine: STDERR: 
	I0719 16:41:45.327854    4019 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2 +20000M
	I0719 16:41:45.335097    4019 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:41:45.335118    4019 main.go:141] libmachine: STDERR: 
	I0719 16:41:45.335138    4019 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2
	I0719 16:41:45.335143    4019 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:41:45.335190    4019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:95:39:cb:be:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2
	I0719 16:41:45.336697    4019 main.go:141] libmachine: STDOUT: 
	I0719 16:41:45.336708    4019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:41:45.336741    4019 client.go:171] LocalClient.Create took 334.466542ms
	I0719 16:41:47.338900    4019 start.go:128] duration metric: createHost completed in 2.361298084s
	I0719 16:41:47.338976    4019 start.go:83] releasing machines lock for "auto-318000", held for 2.361418625s
	W0719 16:41:47.339066    4019 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:41:47.347561    4019 out.go:177] * Deleting "auto-318000" in qemu2 ...
	W0719 16:41:47.366659    4019 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:41:47.366692    4019 start.go:687] Will try again in 5 seconds ...
	I0719 16:41:52.368958    4019 start.go:365] acquiring machines lock for auto-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:41:52.369422    4019 start.go:369] acquired machines lock for "auto-318000" in 385.5µs
	I0719 16:41:52.369539    4019 start.go:93] Provisioning new machine with config: &{Name:auto-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:41:52.369901    4019 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:41:52.379476    4019 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:41:52.427029    4019 start.go:159] libmachine.API.Create for "auto-318000" (driver="qemu2")
	I0719 16:41:52.427066    4019 client.go:168] LocalClient.Create starting
	I0719 16:41:52.427201    4019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:41:52.427264    4019 main.go:141] libmachine: Decoding PEM data...
	I0719 16:41:52.427288    4019 main.go:141] libmachine: Parsing certificate...
	I0719 16:41:52.427382    4019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:41:52.427410    4019 main.go:141] libmachine: Decoding PEM data...
	I0719 16:41:52.427426    4019 main.go:141] libmachine: Parsing certificate...
	I0719 16:41:52.427980    4019 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:41:52.555858    4019 main.go:141] libmachine: Creating SSH key...
	I0719 16:41:52.585335    4019 main.go:141] libmachine: Creating Disk image...
	I0719 16:41:52.585340    4019 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:41:52.585472    4019 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2
	I0719 16:41:52.593837    4019 main.go:141] libmachine: STDOUT: 
	I0719 16:41:52.593853    4019 main.go:141] libmachine: STDERR: 
	I0719 16:41:52.593902    4019 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2 +20000M
	I0719 16:41:52.601140    4019 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:41:52.601160    4019 main.go:141] libmachine: STDERR: 
	I0719 16:41:52.601175    4019 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2
	I0719 16:41:52.601180    4019 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:41:52.601215    4019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6d:90:29:bb:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/auto-318000/disk.qcow2
	I0719 16:41:52.602747    4019 main.go:141] libmachine: STDOUT: 
	I0719 16:41:52.602760    4019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:41:52.602778    4019 client.go:171] LocalClient.Create took 175.7075ms
	I0719 16:41:54.604971    4019 start.go:128] duration metric: createHost completed in 2.235000334s
	I0719 16:41:54.605063    4019 start.go:83] releasing machines lock for "auto-318000", held for 2.235612959s
	W0719 16:41:54.605480    4019 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:41:54.616074    4019 out.go:177] 
	W0719 16:41:54.620167    4019 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:41:54.620188    4019 out.go:239] * 
	* 
	W0719 16:41:54.623011    4019 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:41:54.631115    4019 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.78s)
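Every failure in this group reduces to the same root cause: Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning the socket_vmnet daemon on the build agent was not accepting connections when the qemu2 driver tried to attach the VM's network. That connect step can be reproduced in isolation; the following Go sketch is illustrative only (the socket path is taken from the log, the timeout is an assumption, and this code is not part of the test harness):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the Unix socket that socket_vmnet_client connects to; a
		// "connection refused" error here is the same condition the log reports.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err) // expected on this agent
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Running this on the agent would distinguish a missing/stopped daemon from a permissions or pathing problem before re-running the suite.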

TestNetworkPlugins/group/kindnet/Start (10.06s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (10.057119917s)

-- stdout --
	* [kindnet-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-318000 in cluster kindnet-318000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:41:56.739339    4129 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:41:56.739457    4129 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:41:56.739460    4129 out.go:309] Setting ErrFile to fd 2...
	I0719 16:41:56.739462    4129 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:41:56.739572    4129 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:41:56.740691    4129 out.go:303] Setting JSON to false
	I0719 16:41:56.755854    4129 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4287,"bootTime":1689805829,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:41:56.755929    4129 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:41:56.761345    4129 out.go:177] * [kindnet-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:41:56.765337    4129 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:41:56.765378    4129 notify.go:220] Checking for updates...
	I0719 16:41:56.768235    4129 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:41:56.772336    4129 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:41:56.775328    4129 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:41:56.778297    4129 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:41:56.781327    4129 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:41:56.785086    4129 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:41:56.785146    4129 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:41:56.788280    4129 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:41:56.795362    4129 start.go:298] selected driver: qemu2
	I0719 16:41:56.795367    4129 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:41:56.795374    4129 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:41:56.797347    4129 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:41:56.798820    4129 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:41:56.802409    4129 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:41:56.802429    4129 cni.go:84] Creating CNI manager for "kindnet"
	I0719 16:41:56.802433    4129 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0719 16:41:56.802439    4129 start_flags.go:319] config:
	{Name:kindnet-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:41:56.806621    4129 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:41:56.813243    4129 out.go:177] * Starting control plane node kindnet-318000 in cluster kindnet-318000
	I0719 16:41:56.828376    4129 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:41:56.828403    4129 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:41:56.828417    4129 cache.go:57] Caching tarball of preloaded images
	I0719 16:41:56.828468    4129 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:41:56.828474    4129 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:41:56.828562    4129 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/kindnet-318000/config.json ...
	I0719 16:41:56.828581    4129 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/kindnet-318000/config.json: {Name:mkdc354513fc2ad85c4093bbdeeaa1848ff4f0fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:41:56.828826    4129 start.go:365] acquiring machines lock for kindnet-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:41:56.828861    4129 start.go:369] acquired machines lock for "kindnet-318000" in 28.708µs
	I0719 16:41:56.828874    4129 start.go:93] Provisioning new machine with config: &{Name:kindnet-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:41:56.828906    4129 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:41:56.837302    4129 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:41:56.853966    4129 start.go:159] libmachine.API.Create for "kindnet-318000" (driver="qemu2")
	I0719 16:41:56.853990    4129 client.go:168] LocalClient.Create starting
	I0719 16:41:56.854049    4129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:41:56.854069    4129 main.go:141] libmachine: Decoding PEM data...
	I0719 16:41:56.854082    4129 main.go:141] libmachine: Parsing certificate...
	I0719 16:41:56.854133    4129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:41:56.854147    4129 main.go:141] libmachine: Decoding PEM data...
	I0719 16:41:56.854158    4129 main.go:141] libmachine: Parsing certificate...
	I0719 16:41:56.854537    4129 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:41:56.969235    4129 main.go:141] libmachine: Creating SSH key...
	I0719 16:41:57.238052    4129 main.go:141] libmachine: Creating Disk image...
	I0719 16:41:57.238061    4129 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:41:57.238280    4129 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2
	I0719 16:41:57.247426    4129 main.go:141] libmachine: STDOUT: 
	I0719 16:41:57.247440    4129 main.go:141] libmachine: STDERR: 
	I0719 16:41:57.247520    4129 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2 +20000M
	I0719 16:41:57.254711    4129 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:41:57.254723    4129 main.go:141] libmachine: STDERR: 
	I0719 16:41:57.254736    4129 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2
	I0719 16:41:57.254742    4129 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:41:57.254776    4129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:42:18:6e:15:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2
	I0719 16:41:57.256243    4129 main.go:141] libmachine: STDOUT: 
	I0719 16:41:57.256257    4129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:41:57.256275    4129 client.go:171] LocalClient.Create took 402.281709ms
	I0719 16:41:59.258447    4129 start.go:128] duration metric: createHost completed in 2.429530708s
	I0719 16:41:59.258509    4129 start.go:83] releasing machines lock for "kindnet-318000", held for 2.429644708s
	W0719 16:41:59.258595    4129 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:41:59.269685    4129 out.go:177] * Deleting "kindnet-318000" in qemu2 ...
	W0719 16:41:59.290601    4129 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:41:59.290626    4129 start.go:687] Will try again in 5 seconds ...
	I0719 16:42:04.292828    4129 start.go:365] acquiring machines lock for kindnet-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:42:04.293479    4129 start.go:369] acquired machines lock for "kindnet-318000" in 516.875µs
	I0719 16:42:04.293603    4129 start.go:93] Provisioning new machine with config: &{Name:kindnet-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:42:04.293882    4129 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:42:04.303529    4129 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:42:04.353646    4129 start.go:159] libmachine.API.Create for "kindnet-318000" (driver="qemu2")
	I0719 16:42:04.353686    4129 client.go:168] LocalClient.Create starting
	I0719 16:42:04.353850    4129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:42:04.353897    4129 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:04.353919    4129 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:04.354011    4129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:42:04.354041    4129 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:04.354085    4129 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:04.354681    4129 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:42:04.482144    4129 main.go:141] libmachine: Creating SSH key...
	I0719 16:42:04.704596    4129 main.go:141] libmachine: Creating Disk image...
	I0719 16:42:04.704607    4129 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:42:04.704794    4129 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2
	I0719 16:42:04.713949    4129 main.go:141] libmachine: STDOUT: 
	I0719 16:42:04.713966    4129 main.go:141] libmachine: STDERR: 
	I0719 16:42:04.714037    4129 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2 +20000M
	I0719 16:42:04.721271    4129 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:42:04.721284    4129 main.go:141] libmachine: STDERR: 
	I0719 16:42:04.721304    4129 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2
	I0719 16:42:04.721316    4129 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:42:04.721356    4129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:37:05:88:2d:d4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kindnet-318000/disk.qcow2
	I0719 16:42:04.722890    4129 main.go:141] libmachine: STDOUT: 
	I0719 16:42:04.722902    4129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:42:04.722915    4129 client.go:171] LocalClient.Create took 369.226917ms
	I0719 16:42:06.725047    4129 start.go:128] duration metric: createHost completed in 2.431133875s
	I0719 16:42:06.725105    4129 start.go:83] releasing machines lock for "kindnet-318000", held for 2.431612625s
	W0719 16:42:06.725472    4129 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:06.734714    4129 out.go:177] 
	W0719 16:42:06.739231    4129 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:42:06.739258    4129 out.go:239] * 
	* 
	W0719 16:42:06.742052    4129 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:42:06.755207    4129 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (10.06s)
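The kindnet log above makes the driver's retry path explicit: the first StartHost failure (start.go:672) deletes the half-created profile, waits five seconds (start.go:687), and retries once before exiting with status 80. A condensed sketch of that control flow follows; the function names are illustrative, not minikube's actual API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createWithRetry mirrors the flow in the log: create, delete the
	// partial profile on failure, wait five seconds, then retry once.
	func createWithRetry(create func() error, deleteProfile func()) error {
		if err := create(); err == nil {
			return nil
		}
		deleteProfile()             // "* Deleting ... in qemu2 ..."
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		return create()             // a second failure surfaces as GUEST_PROVISION
	}

	func main() {
		create := func() error {
			return errors.New(`connect "/var/run/socket_vmnet": connection refused`)
		}
		fmt.Println(createWithRetry(create, func() { fmt.Println("deleting profile ...") }))
	}

Because the daemon never came back between attempts, the retry adds roughly five seconds to each test's duration without changing the outcome, which is why every Start failure in this group lands near the ten-second mark.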

TestNetworkPlugins/group/flannel/Start (9.99s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.987988125s)

-- stdout --
	* [flannel-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-318000 in cluster flannel-318000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:42:08.972032    4243 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:42:08.972163    4243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:42:08.972166    4243 out.go:309] Setting ErrFile to fd 2...
	I0719 16:42:08.972168    4243 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:42:08.972294    4243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:42:08.973358    4243 out.go:303] Setting JSON to false
	I0719 16:42:08.988565    4243 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4299,"bootTime":1689805829,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:42:08.988635    4243 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:42:08.993472    4243 out.go:177] * [flannel-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:42:09.001473    4243 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:42:09.001540    4243 notify.go:220] Checking for updates...
	I0719 16:42:09.005431    4243 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:42:09.008436    4243 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:42:09.011465    4243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:42:09.014410    4243 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:42:09.017433    4243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:42:09.020805    4243 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:42:09.020848    4243 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:42:09.025292    4243 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:42:09.032408    4243 start.go:298] selected driver: qemu2
	I0719 16:42:09.032413    4243 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:42:09.032419    4243 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:42:09.034327    4243 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:42:09.037421    4243 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:42:09.040538    4243 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:42:09.040559    4243 cni.go:84] Creating CNI manager for "flannel"
	I0719 16:42:09.040570    4243 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0719 16:42:09.040577    4243 start_flags.go:319] config:
	{Name:flannel-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:42:09.044636    4243 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:42:09.051429    4243 out.go:177] * Starting control plane node flannel-318000 in cluster flannel-318000
	I0719 16:42:09.055445    4243 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:42:09.055473    4243 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:42:09.055485    4243 cache.go:57] Caching tarball of preloaded images
	I0719 16:42:09.055540    4243 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:42:09.055548    4243 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:42:09.055621    4243 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/flannel-318000/config.json ...
	I0719 16:42:09.055638    4243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/flannel-318000/config.json: {Name:mk69d67e9955d3ee48906525cc955d0f1d2cf0fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:42:09.055828    4243 start.go:365] acquiring machines lock for flannel-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:42:09.055857    4243 start.go:369] acquired machines lock for "flannel-318000" in 23.625µs
	I0719 16:42:09.055868    4243 start.go:93] Provisioning new machine with config: &{Name:flannel-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:42:09.055896    4243 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:42:09.064396    4243 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:42:09.080264    4243 start.go:159] libmachine.API.Create for "flannel-318000" (driver="qemu2")
	I0719 16:42:09.080298    4243 client.go:168] LocalClient.Create starting
	I0719 16:42:09.080353    4243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:42:09.080379    4243 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:09.080389    4243 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:09.080437    4243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:42:09.080451    4243 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:09.080461    4243 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:09.080821    4243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:42:09.195090    4243 main.go:141] libmachine: Creating SSH key...
	I0719 16:42:09.516813    4243 main.go:141] libmachine: Creating Disk image...
	I0719 16:42:09.516829    4243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:42:09.517016    4243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2
	I0719 16:42:09.526290    4243 main.go:141] libmachine: STDOUT: 
	I0719 16:42:09.526305    4243 main.go:141] libmachine: STDERR: 
	I0719 16:42:09.526360    4243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2 +20000M
	I0719 16:42:09.533747    4243 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:42:09.533764    4243 main.go:141] libmachine: STDERR: 
	I0719 16:42:09.533784    4243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2
	I0719 16:42:09.533799    4243 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:42:09.533836    4243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:39:b1:75:bf:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2
	I0719 16:42:09.535426    4243 main.go:141] libmachine: STDOUT: 
	I0719 16:42:09.535439    4243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:42:09.535458    4243 client.go:171] LocalClient.Create took 455.158375ms
	I0719 16:42:11.537632    4243 start.go:128] duration metric: createHost completed in 2.48173625s
	I0719 16:42:11.537694    4243 start.go:83] releasing machines lock for "flannel-318000", held for 2.48184475s
	W0719 16:42:11.537774    4243 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:11.549037    4243 out.go:177] * Deleting "flannel-318000" in qemu2 ...
	W0719 16:42:11.571599    4243 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:11.571627    4243 start.go:687] Will try again in 5 seconds ...
	I0719 16:42:16.573911    4243 start.go:365] acquiring machines lock for flannel-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:42:16.574410    4243 start.go:369] acquired machines lock for "flannel-318000" in 376.708µs
	I0719 16:42:16.574533    4243 start.go:93] Provisioning new machine with config: &{Name:flannel-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:42:16.574853    4243 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:42:16.583523    4243 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:42:16.630336    4243 start.go:159] libmachine.API.Create for "flannel-318000" (driver="qemu2")
	I0719 16:42:16.630390    4243 client.go:168] LocalClient.Create starting
	I0719 16:42:16.630541    4243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:42:16.630602    4243 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:16.630620    4243 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:16.630711    4243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:42:16.630740    4243 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:16.630753    4243 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:16.631299    4243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:42:16.758352    4243 main.go:141] libmachine: Creating SSH key...
	I0719 16:42:16.875698    4243 main.go:141] libmachine: Creating Disk image...
	I0719 16:42:16.875707    4243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:42:16.875850    4243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2
	I0719 16:42:16.884270    4243 main.go:141] libmachine: STDOUT: 
	I0719 16:42:16.884286    4243 main.go:141] libmachine: STDERR: 
	I0719 16:42:16.884346    4243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2 +20000M
	I0719 16:42:16.891352    4243 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:42:16.891363    4243 main.go:141] libmachine: STDERR: 
	I0719 16:42:16.891395    4243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2
	I0719 16:42:16.891414    4243 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:42:16.891454    4243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:a0:33:f3:de:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/flannel-318000/disk.qcow2
	I0719 16:42:16.892966    4243 main.go:141] libmachine: STDOUT: 
	I0719 16:42:16.892979    4243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:42:16.892994    4243 client.go:171] LocalClient.Create took 262.601833ms
	I0719 16:42:18.895135    4243 start.go:128] duration metric: createHost completed in 2.320278333s
	I0719 16:42:18.895198    4243 start.go:83] releasing machines lock for "flannel-318000", held for 2.320783958s
	W0719 16:42:18.895597    4243 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:18.903217    4243 out.go:177] 
	W0719 16:42:18.907260    4243 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:42:18.907289    4243 out.go:239] * 
	* 
	W0719 16:42:18.910003    4243 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:42:18.919178    4243 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.99s)
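Note the "-netdev socket,id=net0,fd=3" argument in every qemu-system-aarch64 command line above: socket_vmnet_client is expected to connect to /var/run/socket_vmnet first and hand the connected descriptor to qemu as fd 3, so when that connect is refused, qemu is never launched at all. For reference, passing a descriptor to a child process as fd 3 looks like this in Go; this is a generic sketch, not socket_vmnet_client's actual implementation:

	package main

	import (
		"net"
		"os"
		"os/exec"
	)

	func main() {
		// Connect to the relay socket (path as seen in the log).
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			panic(err) // the step that fails throughout this report
		}
		f, err := conn.(*net.UnixConn).File()
		if err != nil {
			panic(err)
		}
		// ExtraFiles[0] becomes fd 3 in the child, matching qemu's
		// "-netdev socket,id=net0,fd=3".
		cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
		cmd.ExtraFiles = []*os.File{f}
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		_ = cmd.Run()
	}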

TestNetworkPlugins/group/enable-default-cni/Start (9.79s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.784915959s)

-- stdout --
	* [enable-default-cni-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-318000 in cluster enable-default-cni-318000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:42:21.236208    4364 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:42:21.236352    4364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:42:21.236355    4364 out.go:309] Setting ErrFile to fd 2...
	I0719 16:42:21.236358    4364 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:42:21.236474    4364 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:42:21.237542    4364 out.go:303] Setting JSON to false
	I0719 16:42:21.252547    4364 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4312,"bootTime":1689805829,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:42:21.252628    4364 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:42:21.257761    4364 out.go:177] * [enable-default-cni-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:42:21.261680    4364 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:42:21.261732    4364 notify.go:220] Checking for updates...
	I0719 16:42:21.265760    4364 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:42:21.269715    4364 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:42:21.272752    4364 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:42:21.275780    4364 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:42:21.278650    4364 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:42:21.282077    4364 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:42:21.282120    4364 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:42:21.286648    4364 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:42:21.293746    4364 start.go:298] selected driver: qemu2
	I0719 16:42:21.293751    4364 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:42:21.293761    4364 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:42:21.295547    4364 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:42:21.298719    4364 out.go:177] * Automatically selected the socket_vmnet network
	E0719 16:42:21.301805    4364 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0719 16:42:21.301818    4364 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:42:21.301837    4364 cni.go:84] Creating CNI manager for "bridge"
	I0719 16:42:21.301841    4364 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:42:21.301854    4364 start_flags.go:319] config:
	{Name:enable-default-cni-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:42:21.305889    4364 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:42:21.312712    4364 out.go:177] * Starting control plane node enable-default-cni-318000 in cluster enable-default-cni-318000
	I0719 16:42:21.316660    4364 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:42:21.316684    4364 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:42:21.316695    4364 cache.go:57] Caching tarball of preloaded images
	I0719 16:42:21.316750    4364 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:42:21.316755    4364 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:42:21.316821    4364 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/enable-default-cni-318000/config.json ...
	I0719 16:42:21.316834    4364 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/enable-default-cni-318000/config.json: {Name:mk205c334baabe7cba02f0b00a5f1ef708ca25c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:42:21.317029    4364 start.go:365] acquiring machines lock for enable-default-cni-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:42:21.317059    4364 start.go:369] acquired machines lock for "enable-default-cni-318000" in 23.625µs
	I0719 16:42:21.317070    4364 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:42:21.317096    4364 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:42:21.324734    4364 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:42:21.341272    4364 start.go:159] libmachine.API.Create for "enable-default-cni-318000" (driver="qemu2")
	I0719 16:42:21.341330    4364 client.go:168] LocalClient.Create starting
	I0719 16:42:21.341570    4364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:42:21.341613    4364 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:21.341639    4364 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:21.341871    4364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:42:21.341905    4364 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:21.341918    4364 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:21.342335    4364 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:42:21.457330    4364 main.go:141] libmachine: Creating SSH key...
	I0719 16:42:21.559237    4364 main.go:141] libmachine: Creating Disk image...
	I0719 16:42:21.559243    4364 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:42:21.559393    4364 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2
	I0719 16:42:21.568234    4364 main.go:141] libmachine: STDOUT: 
	I0719 16:42:21.568248    4364 main.go:141] libmachine: STDERR: 
	I0719 16:42:21.568292    4364 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2 +20000M
	I0719 16:42:21.575403    4364 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:42:21.575417    4364 main.go:141] libmachine: STDERR: 
	I0719 16:42:21.575430    4364 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2
	I0719 16:42:21.575440    4364 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:42:21.575483    4364 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:04:6c:ad:4e:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2
	I0719 16:42:21.577045    4364 main.go:141] libmachine: STDOUT: 
	I0719 16:42:21.577057    4364 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:42:21.577074    4364 client.go:171] LocalClient.Create took 235.739583ms
	I0719 16:42:23.579216    4364 start.go:128] duration metric: createHost completed in 2.262121708s
	I0719 16:42:23.579314    4364 start.go:83] releasing machines lock for "enable-default-cni-318000", held for 2.262265292s
	W0719 16:42:23.579449    4364 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:23.586847    4364 out.go:177] * Deleting "enable-default-cni-318000" in qemu2 ...
	W0719 16:42:23.607444    4364 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:23.607478    4364 start.go:687] Will try again in 5 seconds ...
	I0719 16:42:28.609632    4364 start.go:365] acquiring machines lock for enable-default-cni-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:42:28.610300    4364 start.go:369] acquired machines lock for "enable-default-cni-318000" in 536.5µs
	I0719 16:42:28.610415    4364 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:42:28.610849    4364 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:42:28.618768    4364 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:42:28.667649    4364 start.go:159] libmachine.API.Create for "enable-default-cni-318000" (driver="qemu2")
	I0719 16:42:28.667695    4364 client.go:168] LocalClient.Create starting
	I0719 16:42:28.667891    4364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:42:28.667953    4364 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:28.667972    4364 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:28.668079    4364 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:42:28.668112    4364 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:28.668124    4364 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:28.668681    4364 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:42:28.796240    4364 main.go:141] libmachine: Creating SSH key...
	I0719 16:42:28.934106    4364 main.go:141] libmachine: Creating Disk image...
	I0719 16:42:28.934113    4364 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:42:28.934298    4364 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2
	I0719 16:42:28.943080    4364 main.go:141] libmachine: STDOUT: 
	I0719 16:42:28.943097    4364 main.go:141] libmachine: STDERR: 
	I0719 16:42:28.943149    4364 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2 +20000M
	I0719 16:42:28.950325    4364 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:42:28.950337    4364 main.go:141] libmachine: STDERR: 
	I0719 16:42:28.950349    4364 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2
	I0719 16:42:28.950353    4364 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:42:28.950394    4364 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:68:c0:bc:50:00 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/enable-default-cni-318000/disk.qcow2
	I0719 16:42:28.951938    4364 main.go:141] libmachine: STDOUT: 
	I0719 16:42:28.951953    4364 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:42:28.951966    4364 client.go:171] LocalClient.Create took 284.268958ms
	I0719 16:42:30.954105    4364 start.go:128] duration metric: createHost completed in 2.343256833s
	I0719 16:42:30.954163    4364 start.go:83] releasing machines lock for "enable-default-cni-318000", held for 2.343860417s
	W0719 16:42:30.954480    4364 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:30.964243    4364 out.go:177] 
	W0719 16:42:30.968280    4364 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:42:30.968304    4364 out.go:239] * 
	* 
	W0719 16:42:30.971211    4364 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:42:30.981191    4364 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.79s)
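The timestamps above also show minikube's one-retry provisioning loop: the first LocalClient.Create fails within ~236ms, the half-created profile is deleted, and after the fixed 5-second backoff logged at start.go:687 a second create is attempted before the command exits with GUEST_PROVISION (exit status 80). A condensed sketch of that control flow, using hypothetical createHost/deleteHost stand-ins rather than minikube's real functions:

// retrysketch.go - condensed, hypothetical model of the create/delete/retry-once
// flow the log timestamps imply; minikube's actual implementation differs.
package main

import (
	"errors"
	"fmt"
	"time"
)

// errRefused mimics the error every create attempt hits in this run.
var errRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

func createHost(name string) error { return errRefused }
func deleteHost(name string)       { fmt.Printf("* Deleting %q in qemu2 ...\n", name) }

func startHost(name string) error {
	err := createHost(name)
	if err == nil {
		return nil
	}
	deleteHost(name)
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second) // fixed backoff, cf. "Will try again in 5 seconds"
	if err := createHost(name); err != nil {
		return fmt.Errorf("error provisioning guest: %w", err) // surfaces as exit status 80
	}
	return nil
}

func main() {
	if err := startHost("enable-default-cni-318000"); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}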

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.6267365s)

                                                
                                                
-- stdout --
	* [bridge-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-318000 in cluster bridge-318000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 16:42:33.155717    4474 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:42:33.155844    4474 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:42:33.155849    4474 out.go:309] Setting ErrFile to fd 2...
	I0719 16:42:33.155852    4474 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:42:33.155955    4474 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:42:33.156960    4474 out.go:303] Setting JSON to false
	I0719 16:42:33.172344    4474 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4324,"bootTime":1689805829,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:42:33.172418    4474 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:42:33.177720    4474 out.go:177] * [bridge-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:42:33.185700    4474 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:42:33.189659    4474 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:42:33.185745    4474 notify.go:220] Checking for updates...
	I0719 16:42:33.195635    4474 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:42:33.198707    4474 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:42:33.201757    4474 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:42:33.204634    4474 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:42:33.208013    4474 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:42:33.208053    4474 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:42:33.212733    4474 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:42:33.219668    4474 start.go:298] selected driver: qemu2
	I0719 16:42:33.219672    4474 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:42:33.219683    4474 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:42:33.221499    4474 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:42:33.224710    4474 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:42:33.227768    4474 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:42:33.227789    4474 cni.go:84] Creating CNI manager for "bridge"
	I0719 16:42:33.227792    4474 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:42:33.227799    4474 start_flags.go:319] config:
	{Name:bridge-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:42:33.231859    4474 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:42:33.235744    4474 out.go:177] * Starting control plane node bridge-318000 in cluster bridge-318000
	I0719 16:42:33.243713    4474 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:42:33.243755    4474 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:42:33.243765    4474 cache.go:57] Caching tarball of preloaded images
	I0719 16:42:33.243839    4474 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:42:33.243846    4474 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:42:33.243914    4474 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/bridge-318000/config.json ...
	I0719 16:42:33.243927    4474 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/bridge-318000/config.json: {Name:mk23823111b7089c6bd5076e190fdfbf146422c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:42:33.244138    4474 start.go:365] acquiring machines lock for bridge-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:42:33.244170    4474 start.go:369] acquired machines lock for "bridge-318000" in 25.792µs
	I0719 16:42:33.244182    4474 start.go:93] Provisioning new machine with config: &{Name:bridge-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:42:33.244214    4474 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:42:33.248751    4474 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:42:33.264893    4474 start.go:159] libmachine.API.Create for "bridge-318000" (driver="qemu2")
	I0719 16:42:33.264915    4474 client.go:168] LocalClient.Create starting
	I0719 16:42:33.264978    4474 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:42:33.264999    4474 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:33.265011    4474 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:33.265061    4474 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:42:33.265075    4474 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:33.265086    4474 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:33.265422    4474 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:42:33.380439    4474 main.go:141] libmachine: Creating SSH key...
	I0719 16:42:33.419232    4474 main.go:141] libmachine: Creating Disk image...
	I0719 16:42:33.419240    4474 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:42:33.419416    4474 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2
	I0719 16:42:33.428425    4474 main.go:141] libmachine: STDOUT: 
	I0719 16:42:33.428442    4474 main.go:141] libmachine: STDERR: 
	I0719 16:42:33.428510    4474 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2 +20000M
	I0719 16:42:33.436079    4474 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:42:33.436097    4474 main.go:141] libmachine: STDERR: 
	I0719 16:42:33.436114    4474 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2
	I0719 16:42:33.436119    4474 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:42:33.436165    4474 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:f5:a9:c8:10:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2
	I0719 16:42:33.437775    4474 main.go:141] libmachine: STDOUT: 
	I0719 16:42:33.437799    4474 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:42:33.437821    4474 client.go:171] LocalClient.Create took 172.903584ms
	I0719 16:42:35.439958    4474 start.go:128] duration metric: createHost completed in 2.195749625s
	I0719 16:42:35.440028    4474 start.go:83] releasing machines lock for "bridge-318000", held for 2.195870333s
	W0719 16:42:35.440142    4474 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:35.447249    4474 out.go:177] * Deleting "bridge-318000" in qemu2 ...
	W0719 16:42:35.472037    4474 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:35.472066    4474 start.go:687] Will try again in 5 seconds ...
	I0719 16:42:40.474190    4474 start.go:365] acquiring machines lock for bridge-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:42:40.474678    4474 start.go:369] acquired machines lock for "bridge-318000" in 391.833µs
	I0719 16:42:40.474791    4474 start.go:93] Provisioning new machine with config: &{Name:bridge-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:42:40.475081    4474 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:42:40.486798    4474 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:42:40.533398    4474 start.go:159] libmachine.API.Create for "bridge-318000" (driver="qemu2")
	I0719 16:42:40.533436    4474 client.go:168] LocalClient.Create starting
	I0719 16:42:40.533580    4474 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:42:40.533681    4474 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:40.533701    4474 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:40.533772    4474 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:42:40.533801    4474 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:40.533811    4474 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:40.534324    4474 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:42:40.662246    4474 main.go:141] libmachine: Creating SSH key...
	I0719 16:42:40.695543    4474 main.go:141] libmachine: Creating Disk image...
	I0719 16:42:40.695548    4474 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:42:40.695698    4474 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2
	I0719 16:42:40.704042    4474 main.go:141] libmachine: STDOUT: 
	I0719 16:42:40.704056    4474 main.go:141] libmachine: STDERR: 
	I0719 16:42:40.704123    4474 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2 +20000M
	I0719 16:42:40.711209    4474 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:42:40.711222    4474 main.go:141] libmachine: STDERR: 
	I0719 16:42:40.711234    4474 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2
	I0719 16:42:40.711237    4474 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:42:40.711271    4474 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:6c:46:10:cd:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/bridge-318000/disk.qcow2
	I0719 16:42:40.712713    4474 main.go:141] libmachine: STDOUT: 
	I0719 16:42:40.712737    4474 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:42:40.712748    4474 client.go:171] LocalClient.Create took 179.309375ms
	I0719 16:42:42.714906    4474 start.go:128] duration metric: createHost completed in 2.239825333s
	I0719 16:42:42.715050    4474 start.go:83] releasing machines lock for "bridge-318000", held for 2.240292833s
	W0719 16:42:42.715414    4474 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:42.725057    4474 out.go:177] 
	W0719 16:42:42.728927    4474 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:42:42.728974    4474 out.go:239] * 
	* 
	W0719 16:42:42.731377    4474 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:42:42.742030    4474 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.63s)
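For reference, the "(dbg) Run:" and "failed start: exit status 80" lines come from the harness shelling out to the built binary and asserting a zero exit. A self-contained approximation of that step with os/exec (the program is illustrative; the real assertion is the net_test.go:114 check above):

// runcheck.go - hypothetical stand-in for the harness step behind the
// "(dbg) Run:" lines: run the minikube binary and report a non-zero exit,
// such as the exit status 80 seen throughout this report.
package main

import (
	"errors"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64",
		"start", "-p", "bridge-318000", "--memory=3072", "--alsologtostderr",
		"--wait=true", "--wait-timeout=15m", "--cni=bridge", "--driver=qemu2")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr // stream output like the (dbg) runner
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Printf("failed start: exit status %d\n", ee.ExitCode()) // e.g. 80
		} else {
			fmt.Println("failed to run:", err)
		}
		os.Exit(1)
	}
}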

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.843739667s)

                                                
                                                
-- stdout --
	* [kubenet-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-318000 in cluster kubenet-318000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 16:42:44.921720    4584 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:42:44.921847    4584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:42:44.921850    4584 out.go:309] Setting ErrFile to fd 2...
	I0719 16:42:44.921852    4584 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:42:44.921952    4584 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:42:44.922939    4584 out.go:303] Setting JSON to false
	I0719 16:42:44.938251    4584 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4335,"bootTime":1689805829,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:42:44.938333    4584 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:42:44.943795    4584 out.go:177] * [kubenet-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:42:44.951749    4584 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:42:44.951798    4584 notify.go:220] Checking for updates...
	I0719 16:42:44.958795    4584 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:42:44.961774    4584 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:42:44.964767    4584 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:42:44.967783    4584 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:42:44.970771    4584 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:42:44.974138    4584 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:42:44.974185    4584 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:42:44.978671    4584 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:42:44.985731    4584 start.go:298] selected driver: qemu2
	I0719 16:42:44.985736    4584 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:42:44.985742    4584 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:42:44.987615    4584 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:42:44.990674    4584 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:42:44.993865    4584 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:42:44.993892    4584 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0719 16:42:44.993896    4584 start_flags.go:319] config:
	{Name:kubenet-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:42:44.997972    4584 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:42:45.004787    4584 out.go:177] * Starting control plane node kubenet-318000 in cluster kubenet-318000
	I0719 16:42:45.008843    4584 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:42:45.008872    4584 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:42:45.008883    4584 cache.go:57] Caching tarball of preloaded images
	I0719 16:42:45.008960    4584 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:42:45.008966    4584 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:42:45.009029    4584 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/kubenet-318000/config.json ...
	I0719 16:42:45.009042    4584 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/kubenet-318000/config.json: {Name:mk4e6a416ca1b619a140b2c95e491f0847f634c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:42:45.009258    4584 start.go:365] acquiring machines lock for kubenet-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:42:45.009289    4584 start.go:369] acquired machines lock for "kubenet-318000" in 25.292µs
	I0719 16:42:45.009300    4584 start.go:93] Provisioning new machine with config: &{Name:kubenet-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:42:45.009332    4584 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:42:45.017776    4584 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:42:45.034602    4584 start.go:159] libmachine.API.Create for "kubenet-318000" (driver="qemu2")
	I0719 16:42:45.034625    4584 client.go:168] LocalClient.Create starting
	I0719 16:42:45.034693    4584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:42:45.034716    4584 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:45.034725    4584 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:45.034762    4584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:42:45.034782    4584 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:45.034790    4584 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:45.035153    4584 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:42:45.151057    4584 main.go:141] libmachine: Creating SSH key...
	I0719 16:42:45.284262    4584 main.go:141] libmachine: Creating Disk image...
	I0719 16:42:45.284270    4584 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:42:45.284417    4584 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2
	I0719 16:42:45.292854    4584 main.go:141] libmachine: STDOUT: 
	I0719 16:42:45.292869    4584 main.go:141] libmachine: STDERR: 
	I0719 16:42:45.292923    4584 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2 +20000M
	I0719 16:42:45.300051    4584 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:42:45.300063    4584 main.go:141] libmachine: STDERR: 
	I0719 16:42:45.300080    4584 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2
	I0719 16:42:45.300087    4584 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:42:45.300130    4584 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:cf:64:16:83:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2
	I0719 16:42:45.301654    4584 main.go:141] libmachine: STDOUT: 
	I0719 16:42:45.301668    4584 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:42:45.301685    4584 client.go:171] LocalClient.Create took 267.060041ms
	I0719 16:42:47.303815    4584 start.go:128] duration metric: createHost completed in 2.294490167s
	I0719 16:42:47.303882    4584 start.go:83] releasing machines lock for "kubenet-318000", held for 2.294607s
	W0719 16:42:47.303961    4584 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:47.316174    4584 out.go:177] * Deleting "kubenet-318000" in qemu2 ...
	W0719 16:42:47.335980    4584 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:47.336003    4584 start.go:687] Will try again in 5 seconds ...
	I0719 16:42:52.338166    4584 start.go:365] acquiring machines lock for kubenet-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:42:52.338718    4584 start.go:369] acquired machines lock for "kubenet-318000" in 456.875µs
	I0719 16:42:52.338837    4584 start.go:93] Provisioning new machine with config: &{Name:kubenet-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:42:52.339143    4584 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:42:52.350816    4584 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:42:52.397707    4584 start.go:159] libmachine.API.Create for "kubenet-318000" (driver="qemu2")
	I0719 16:42:52.397742    4584 client.go:168] LocalClient.Create starting
	I0719 16:42:52.397886    4584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:42:52.397933    4584 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:52.397956    4584 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:52.398062    4584 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:42:52.398092    4584 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:52.398105    4584 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:52.398706    4584 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:42:52.526773    4584 main.go:141] libmachine: Creating SSH key...
	I0719 16:42:52.677385    4584 main.go:141] libmachine: Creating Disk image...
	I0719 16:42:52.677391    4584 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:42:52.677551    4584 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2
	I0719 16:42:52.686222    4584 main.go:141] libmachine: STDOUT: 
	I0719 16:42:52.686240    4584 main.go:141] libmachine: STDERR: 
	I0719 16:42:52.686308    4584 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2 +20000M
	I0719 16:42:52.693423    4584 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:42:52.693436    4584 main.go:141] libmachine: STDERR: 
	I0719 16:42:52.693454    4584 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2
	I0719 16:42:52.693460    4584 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:42:52.693501    4584 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:75:88:f7:a3:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/kubenet-318000/disk.qcow2
	I0719 16:42:52.695031    4584 main.go:141] libmachine: STDOUT: 
	I0719 16:42:52.695046    4584 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:42:52.695057    4584 client.go:171] LocalClient.Create took 297.314208ms
	I0719 16:42:54.697282    4584 start.go:128] duration metric: createHost completed in 2.358122917s
	I0719 16:42:54.697365    4584 start.go:83] releasing machines lock for "kubenet-318000", held for 2.358645625s
	W0719 16:42:54.697852    4584 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:54.707681    4584 out.go:177] 
	W0719 16:42:54.712859    4584 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:42:54.712883    4584 out.go:239] * 
	* 
	W0719 16:42:54.715699    4584 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:42:54.724689    4584 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.85s)
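Note: every failure in this group shares one root cause, visible in the STDERR lines above: nothing is listening on the host unix socket /var/run/socket_vmnet, so /opt/socket_vmnet/bin/socket_vmnet_client exits with "Connection refused" and qemu-system-aarch64 is never started. A minimal standalone Go sketch (an editorial illustration, not minikube code; the socket path is taken from the log) that reproduces the probe the client performs:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client passes to QEMU
		// (the fd=3 in the -netdev flag above). With no socket_vmnet
		// daemon listening, Dial typically fails with
		// "connect: connection refused", matching the STDERR above.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}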

TestNetworkPlugins/group/custom-flannel/Start (9.69s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.686904583s)

-- stdout --
	* [custom-flannel-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-318000 in cluster custom-flannel-318000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:42:56.891712    4694 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:42:56.891843    4694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:42:56.891846    4694 out.go:309] Setting ErrFile to fd 2...
	I0719 16:42:56.891849    4694 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:42:56.891962    4694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:42:56.892999    4694 out.go:303] Setting JSON to false
	I0719 16:42:56.908157    4694 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4347,"bootTime":1689805829,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:42:56.908206    4694 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:42:56.913950    4694 out.go:177] * [custom-flannel-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:42:56.921950    4694 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:42:56.921996    4694 notify.go:220] Checking for updates...
	I0719 16:42:56.925908    4694 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:42:56.928939    4694 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:42:56.931920    4694 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:42:56.934857    4694 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:42:56.937915    4694 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:42:56.941254    4694 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:42:56.941299    4694 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:42:56.945841    4694 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:42:56.952900    4694 start.go:298] selected driver: qemu2
	I0719 16:42:56.952911    4694 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:42:56.952918    4694 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:42:56.954793    4694 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:42:56.957894    4694 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:42:56.960938    4694 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:42:56.960961    4694 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0719 16:42:56.960987    4694 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0719 16:42:56.960993    4694 start_flags.go:319] config:
	{Name:custom-flannel-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:42:56.965035    4694 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:42:56.971916    4694 out.go:177] * Starting control plane node custom-flannel-318000 in cluster custom-flannel-318000
	I0719 16:42:56.975904    4694 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:42:56.975929    4694 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:42:56.975944    4694 cache.go:57] Caching tarball of preloaded images
	I0719 16:42:56.976018    4694 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:42:56.976031    4694 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:42:56.976096    4694 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/custom-flannel-318000/config.json ...
	I0719 16:42:56.976109    4694 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/custom-flannel-318000/config.json: {Name:mkaa1d0523444bee0acd7fe866de0f031a179237 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:42:56.976315    4694 start.go:365] acquiring machines lock for custom-flannel-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:42:56.976348    4694 start.go:369] acquired machines lock for "custom-flannel-318000" in 25.417µs
	I0719 16:42:56.976360    4694 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:42:56.976389    4694 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:42:56.984858    4694 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:42:57.001711    4694 start.go:159] libmachine.API.Create for "custom-flannel-318000" (driver="qemu2")
	I0719 16:42:57.001744    4694 client.go:168] LocalClient.Create starting
	I0719 16:42:57.001809    4694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:42:57.001834    4694 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:57.001842    4694 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:57.001886    4694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:42:57.001901    4694 main.go:141] libmachine: Decoding PEM data...
	I0719 16:42:57.001907    4694 main.go:141] libmachine: Parsing certificate...
	I0719 16:42:57.002233    4694 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:42:57.118607    4694 main.go:141] libmachine: Creating SSH key...
	I0719 16:42:57.189801    4694 main.go:141] libmachine: Creating Disk image...
	I0719 16:42:57.189810    4694 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:42:57.189958    4694 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2
	I0719 16:42:57.198436    4694 main.go:141] libmachine: STDOUT: 
	I0719 16:42:57.198453    4694 main.go:141] libmachine: STDERR: 
	I0719 16:42:57.198516    4694 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2 +20000M
	I0719 16:42:57.205575    4694 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:42:57.205588    4694 main.go:141] libmachine: STDERR: 
	I0719 16:42:57.205602    4694 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2
	I0719 16:42:57.205610    4694 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:42:57.205647    4694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:0a:e8:17:ab:d2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2
	I0719 16:42:57.207140    4694 main.go:141] libmachine: STDOUT: 
	I0719 16:42:57.207157    4694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:42:57.207175    4694 client.go:171] LocalClient.Create took 205.423667ms
	I0719 16:42:59.209315    4694 start.go:128] duration metric: createHost completed in 2.232928334s
	I0719 16:42:59.209702    4694 start.go:83] releasing machines lock for "custom-flannel-318000", held for 2.233363875s
	W0719 16:42:59.209751    4694 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:59.216936    4694 out.go:177] * Deleting "custom-flannel-318000" in qemu2 ...
	W0719 16:42:59.236517    4694 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:42:59.236539    4694 start.go:687] Will try again in 5 seconds ...
	I0719 16:43:04.238680    4694 start.go:365] acquiring machines lock for custom-flannel-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:04.239283    4694 start.go:369] acquired machines lock for "custom-flannel-318000" in 493.458µs
	I0719 16:43:04.239475    4694 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:43:04.239742    4694 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:43:04.248439    4694 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:43:04.297545    4694 start.go:159] libmachine.API.Create for "custom-flannel-318000" (driver="qemu2")
	I0719 16:43:04.297587    4694 client.go:168] LocalClient.Create starting
	I0719 16:43:04.297785    4694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:43:04.297846    4694 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:04.297866    4694 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:04.297951    4694 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:43:04.297987    4694 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:04.298001    4694 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:04.298590    4694 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:43:04.426167    4694 main.go:141] libmachine: Creating SSH key...
	I0719 16:43:04.492114    4694 main.go:141] libmachine: Creating Disk image...
	I0719 16:43:04.492121    4694 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:43:04.492267    4694 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2
	I0719 16:43:04.500791    4694 main.go:141] libmachine: STDOUT: 
	I0719 16:43:04.500810    4694 main.go:141] libmachine: STDERR: 
	I0719 16:43:04.500868    4694 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2 +20000M
	I0719 16:43:04.507973    4694 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:43:04.507984    4694 main.go:141] libmachine: STDERR: 
	I0719 16:43:04.507998    4694 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2
	I0719 16:43:04.508002    4694 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:43:04.508033    4694 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ff:82:6d:e7:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/custom-flannel-318000/disk.qcow2
	I0719 16:43:04.509430    4694 main.go:141] libmachine: STDOUT: 
	I0719 16:43:04.509447    4694 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:04.509459    4694 client.go:171] LocalClient.Create took 211.869666ms
	I0719 16:43:06.511702    4694 start.go:128] duration metric: createHost completed in 2.271959s
	I0719 16:43:06.511745    4694 start.go:83] releasing machines lock for "custom-flannel-318000", held for 2.27242525s
	W0719 16:43:06.512082    4694 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:06.522716    4694 out.go:177] 
	W0719 16:43:06.526692    4694 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:43:06.526719    4694 out.go:239] * 
	* 
	W0719 16:43:06.529380    4694 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:43:06.537607    4694 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.69s)
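Note: each of these logs follows the same control flow: create the host, delete the profile on failure, wait a fixed 5 seconds ("Will try again in 5 seconds ..."), retry once, then exit with status 80 (GUEST_PROVISION), which net_test.go:114 reports as the failed start. An illustrative Go sketch of that pattern (simplified, hypothetical names; not minikube's actual API):

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// createHost stands in for libmachine's LocalClient.Create, which fails
	// in these runs because /var/run/socket_vmnet refuses connections.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // fixed backoff seen in the log
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				os.Exit(80) // the exit status the test asserts against
			}
		}
	}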

TestNetworkPlugins/group/calico/Start (9.88s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.877398709s)

-- stdout --
	* [calico-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-318000 in cluster calico-318000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:43:08.889153    4812 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:43:08.889285    4812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:08.889290    4812 out.go:309] Setting ErrFile to fd 2...
	I0719 16:43:08.889292    4812 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:08.889406    4812 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:43:08.890450    4812 out.go:303] Setting JSON to false
	I0719 16:43:08.905628    4812 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4359,"bootTime":1689805829,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:43:08.905716    4812 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:43:08.911193    4812 out.go:177] * [calico-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:43:08.915210    4812 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:43:08.919158    4812 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:43:08.915259    4812 notify.go:220] Checking for updates...
	I0719 16:43:08.923149    4812 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:43:08.926195    4812 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:43:08.929081    4812 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:43:08.932153    4812 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:43:08.935440    4812 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:43:08.935479    4812 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:43:08.940046    4812 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:43:08.947116    4812 start.go:298] selected driver: qemu2
	I0719 16:43:08.947127    4812 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:43:08.947133    4812 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:43:08.948896    4812 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:43:08.952128    4812 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:43:08.955223    4812 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:43:08.955246    4812 cni.go:84] Creating CNI manager for "calico"
	I0719 16:43:08.955250    4812 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0719 16:43:08.955257    4812 start_flags.go:319] config:
	{Name:calico-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:43:08.959405    4812 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:08.966155    4812 out.go:177] * Starting control plane node calico-318000 in cluster calico-318000
	I0719 16:43:08.970137    4812 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:43:08.970167    4812 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:43:08.970180    4812 cache.go:57] Caching tarball of preloaded images
	I0719 16:43:08.970241    4812 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:43:08.970246    4812 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:43:08.970302    4812 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/calico-318000/config.json ...
	I0719 16:43:08.970314    4812 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/calico-318000/config.json: {Name:mke1d6663e86d6267da484921bfc3becc2674682 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:43:08.970525    4812 start.go:365] acquiring machines lock for calico-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:08.970554    4812 start.go:369] acquired machines lock for "calico-318000" in 23.667µs
	I0719 16:43:08.970567    4812 start.go:93] Provisioning new machine with config: &{Name:calico-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:43:08.970592    4812 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:43:08.978137    4812 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:43:08.994177    4812 start.go:159] libmachine.API.Create for "calico-318000" (driver="qemu2")
	I0719 16:43:08.994199    4812 client.go:168] LocalClient.Create starting
	I0719 16:43:08.994259    4812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:43:08.994287    4812 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:08.994297    4812 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:08.994346    4812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:43:08.994368    4812 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:08.994374    4812 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:08.994718    4812 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:43:09.110803    4812 main.go:141] libmachine: Creating SSH key...
	I0719 16:43:09.216274    4812 main.go:141] libmachine: Creating Disk image...
	I0719 16:43:09.216281    4812 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:43:09.216424    4812 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2
	I0719 16:43:09.224813    4812 main.go:141] libmachine: STDOUT: 
	I0719 16:43:09.224839    4812 main.go:141] libmachine: STDERR: 
	I0719 16:43:09.224896    4812 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2 +20000M
	I0719 16:43:09.232015    4812 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:43:09.232031    4812 main.go:141] libmachine: STDERR: 
	I0719 16:43:09.232049    4812 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2
	I0719 16:43:09.232056    4812 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:43:09.232090    4812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:0b:eb:72:31:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2
	I0719 16:43:09.233651    4812 main.go:141] libmachine: STDOUT: 
	I0719 16:43:09.233664    4812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:09.233679    4812 client.go:171] LocalClient.Create took 239.478834ms
	I0719 16:43:11.235810    4812 start.go:128] duration metric: createHost completed in 2.265225708s
	I0719 16:43:11.235879    4812 start.go:83] releasing machines lock for "calico-318000", held for 2.265340709s
	W0719 16:43:11.235993    4812 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:11.243490    4812 out.go:177] * Deleting "calico-318000" in qemu2 ...
	W0719 16:43:11.267154    4812 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:11.267177    4812 start.go:687] Will try again in 5 seconds ...
	I0719 16:43:16.269411    4812 start.go:365] acquiring machines lock for calico-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:16.269895    4812 start.go:369] acquired machines lock for "calico-318000" in 375.166µs
	I0719 16:43:16.270023    4812 start.go:93] Provisioning new machine with config: &{Name:calico-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:43:16.270299    4812 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:43:16.279015    4812 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:43:16.323304    4812 start.go:159] libmachine.API.Create for "calico-318000" (driver="qemu2")
	I0719 16:43:16.323353    4812 client.go:168] LocalClient.Create starting
	I0719 16:43:16.323573    4812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:43:16.323632    4812 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:16.323658    4812 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:16.323773    4812 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:43:16.323809    4812 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:16.323824    4812 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:16.324367    4812 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:43:16.451262    4812 main.go:141] libmachine: Creating SSH key...
	I0719 16:43:16.679988    4812 main.go:141] libmachine: Creating Disk image...
	I0719 16:43:16.679997    4812 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:43:16.680154    4812 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2
	I0719 16:43:16.688774    4812 main.go:141] libmachine: STDOUT: 
	I0719 16:43:16.688789    4812 main.go:141] libmachine: STDERR: 
	I0719 16:43:16.688864    4812 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2 +20000M
	I0719 16:43:16.696122    4812 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:43:16.696136    4812 main.go:141] libmachine: STDERR: 
	I0719 16:43:16.696154    4812 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2
	I0719 16:43:16.696165    4812 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:43:16.696208    4812 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:92:71:e8:2b:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/calico-318000/disk.qcow2
	I0719 16:43:16.697692    4812 main.go:141] libmachine: STDOUT: 
	I0719 16:43:16.697703    4812 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:16.697715    4812 client.go:171] LocalClient.Create took 374.361875ms
	I0719 16:43:18.699853    4812 start.go:128] duration metric: createHost completed in 2.429557583s
	I0719 16:43:18.699945    4812 start.go:83] releasing machines lock for "calico-318000", held for 2.430024625s
	W0719 16:43:18.700338    4812 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:18.709068    4812 out.go:177] 
	W0719 16:43:18.714037    4812 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:43:18.714073    4812 out.go:239] * 
	* 
	W0719 16:43:18.716884    4812 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:43:18.725978    4812 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.88s)
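
Note: every qemu2 start failure in this group has the same root cause. The driver shells out to /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon ("Failed to connect to "/var/run/socket_vmnet": Connection refused"), so the QEMU VM is never launched. A minimal standalone Go probe, a sketch only (not part of minikube; the socket path is taken from the SocketVMnetPath field logged above), can confirm whether anything is listening:

    // probe.go - dial the unix socket that socket_vmnet_client connects to.
    package main

    import (
        "fmt"
        "net"
        "os"
    )

    func main() {
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            // A "connection refused" here matches the STDERR captured above:
            // no daemon is listening on the socket.
            fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is listening")
    }

If the probe fails, starting the socket_vmnet daemon on this path (typically as root, since it creates the vmnet interface) before re-running the suite is the likely fix.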

TestNetworkPlugins/group/false/Start (9.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
E0719 16:43:30.215239    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-318000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.7740515s)

-- stdout --
	* [false-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-318000 in cluster false-318000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-318000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:43:21.083445    4933 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:43:21.083564    4933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:21.083568    4933 out.go:309] Setting ErrFile to fd 2...
	I0719 16:43:21.083571    4933 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:21.083681    4933 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:43:21.084671    4933 out.go:303] Setting JSON to false
	I0719 16:43:21.099974    4933 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4372,"bootTime":1689805829,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:43:21.100052    4933 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:43:21.105444    4933 out.go:177] * [false-318000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:43:21.113441    4933 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:43:21.118362    4933 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:43:21.113508    4933 notify.go:220] Checking for updates...
	I0719 16:43:21.124377    4933 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:43:21.127392    4933 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:43:21.130358    4933 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:43:21.133386    4933 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:43:21.136710    4933 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:43:21.136751    4933 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:43:21.141350    4933 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:43:21.148384    4933 start.go:298] selected driver: qemu2
	I0719 16:43:21.148389    4933 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:43:21.148398    4933 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:43:21.150313    4933 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:43:21.153250    4933 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:43:21.156437    4933 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:43:21.156457    4933 cni.go:84] Creating CNI manager for "false"
	I0719 16:43:21.156471    4933 start_flags.go:319] config:
	{Name:false-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:43:21.160530    4933 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:21.167345    4933 out.go:177] * Starting control plane node false-318000 in cluster false-318000
	I0719 16:43:21.171322    4933 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:43:21.171352    4933 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:43:21.171368    4933 cache.go:57] Caching tarball of preloaded images
	I0719 16:43:21.171439    4933 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:43:21.171444    4933 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:43:21.171501    4933 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/false-318000/config.json ...
	I0719 16:43:21.171516    4933 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/false-318000/config.json: {Name:mk26e66a4a9094a75a93c2fe8a4756346050c874 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:43:21.171745    4933 start.go:365] acquiring machines lock for false-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:21.171777    4933 start.go:369] acquired machines lock for "false-318000" in 25.834µs
	I0719 16:43:21.171791    4933 start.go:93] Provisioning new machine with config: &{Name:false-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:43:21.171827    4933 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:43:21.179362    4933 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:43:21.196371    4933 start.go:159] libmachine.API.Create for "false-318000" (driver="qemu2")
	I0719 16:43:21.196408    4933 client.go:168] LocalClient.Create starting
	I0719 16:43:21.196464    4933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:43:21.196487    4933 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:21.196502    4933 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:21.196555    4933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:43:21.196574    4933 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:21.196581    4933 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:21.196902    4933 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:43:21.312966    4933 main.go:141] libmachine: Creating SSH key...
	I0719 16:43:21.358389    4933 main.go:141] libmachine: Creating Disk image...
	I0719 16:43:21.358394    4933 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:43:21.358534    4933 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2
	I0719 16:43:21.367033    4933 main.go:141] libmachine: STDOUT: 
	I0719 16:43:21.367047    4933 main.go:141] libmachine: STDERR: 
	I0719 16:43:21.367111    4933 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2 +20000M
	I0719 16:43:21.374309    4933 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:43:21.374320    4933 main.go:141] libmachine: STDERR: 
	I0719 16:43:21.374332    4933 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2
	I0719 16:43:21.374338    4933 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:43:21.374370    4933 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:2c:b0:88:e5:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2
	I0719 16:43:21.375897    4933 main.go:141] libmachine: STDOUT: 
	I0719 16:43:21.375910    4933 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:21.375926    4933 client.go:171] LocalClient.Create took 179.513792ms
	I0719 16:43:23.378116    4933 start.go:128] duration metric: createHost completed in 2.206295541s
	I0719 16:43:23.378175    4933 start.go:83] releasing machines lock for "false-318000", held for 2.206410458s
	W0719 16:43:23.378232    4933 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:23.388249    4933 out.go:177] * Deleting "false-318000" in qemu2 ...
	W0719 16:43:23.409376    4933 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:23.409406    4933 start.go:687] Will try again in 5 seconds ...
	I0719 16:43:28.411653    4933 start.go:365] acquiring machines lock for false-318000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:28.415813    4933 start.go:369] acquired machines lock for "false-318000" in 3.989542ms
	I0719 16:43:28.416474    4933 start.go:93] Provisioning new machine with config: &{Name:false-318000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-318000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:43:28.416773    4933 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:43:28.421267    4933 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0719 16:43:28.466647    4933 start.go:159] libmachine.API.Create for "false-318000" (driver="qemu2")
	I0719 16:43:28.466688    4933 client.go:168] LocalClient.Create starting
	I0719 16:43:28.466872    4933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:43:28.466918    4933 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:28.466944    4933 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:28.467027    4933 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:43:28.467055    4933 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:28.467069    4933 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:28.467542    4933 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:43:28.594682    4933 main.go:141] libmachine: Creating SSH key...
	I0719 16:43:28.770779    4933 main.go:141] libmachine: Creating Disk image...
	I0719 16:43:28.770786    4933 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:43:28.770965    4933 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2
	I0719 16:43:28.779769    4933 main.go:141] libmachine: STDOUT: 
	I0719 16:43:28.779793    4933 main.go:141] libmachine: STDERR: 
	I0719 16:43:28.779852    4933 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2 +20000M
	I0719 16:43:28.787069    4933 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:43:28.787094    4933 main.go:141] libmachine: STDERR: 
	I0719 16:43:28.787111    4933 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2
	I0719 16:43:28.787120    4933 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:43:28.787165    4933 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:fb:61:17:00:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/false-318000/disk.qcow2
	I0719 16:43:28.788659    4933 main.go:141] libmachine: STDOUT: 
	I0719 16:43:28.788674    4933 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:28.788688    4933 client.go:171] LocalClient.Create took 321.998917ms
	I0719 16:43:30.790822    4933 start.go:128] duration metric: createHost completed in 2.37405325s
	I0719 16:43:30.790885    4933 start.go:83] releasing machines lock for "false-318000", held for 2.374581709s
	W0719 16:43:30.791299    4933 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-318000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:30.801040    4933 out.go:177] 
	W0719 16:43:30.805119    4933 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:43:30.805143    4933 out.go:239] * 
	* 
	W0719 16:43:30.807458    4933 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:43:30.821015    4933 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
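
Note the retry shape visible in the stderr above: one create attempt, a profile delete, a fixed five-second wait (start.go:687 "Will try again in 5 seconds"), a single retry, then exit status 80 (GUEST_PROVISION). A rough illustrative sketch of that control flow, explicitly not minikube's actual code:

    // retrysketch.go - mirrors the try / delete / wait 5s / retry-once
    // sequence the log shows before minikube gives up.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // createHost stands in for the libmachine create call, which in these
    // runs always fails with "Connection refused" on /var/run/socket_vmnet.
    func createHost() error {
        return errors.New("connection refused")
    }

    func main() {
        if err := createHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            // the real flow deletes the half-created profile here
            time.Sleep(5 * time.Second)
            if err := createHost(); err != nil {
                fmt.Fprintln(os.Stderr, "X Exiting due to GUEST_PROVISION:", err)
                os.Exit(80) // the exit status 80 that net_test.go reports
            }
        }
    }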

TestStartStop/group/old-k8s-version/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-870000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-870000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.775653042s)

-- stdout --
	* [old-k8s-version-870000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-870000 in cluster old-k8s-version-870000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-870000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:43:32.968844    5043 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:43:32.968981    5043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:32.968984    5043 out.go:309] Setting ErrFile to fd 2...
	I0719 16:43:32.968987    5043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:32.969092    5043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:43:32.970106    5043 out.go:303] Setting JSON to false
	I0719 16:43:32.985138    5043 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4383,"bootTime":1689805829,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:43:32.985194    5043 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:43:32.990266    5043 out.go:177] * [old-k8s-version-870000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:43:32.999154    5043 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:43:33.003111    5043 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:43:32.999194    5043 notify.go:220] Checking for updates...
	I0719 16:43:33.009089    5043 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:43:33.012105    5043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:43:33.015049    5043 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:43:33.018106    5043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:43:33.021453    5043 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:43:33.021505    5043 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:43:33.026027    5043 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:43:33.034067    5043 start.go:298] selected driver: qemu2
	I0719 16:43:33.034075    5043 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:43:33.034082    5043 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:43:33.037103    5043 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:43:33.042088    5043 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:43:33.048186    5043 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:43:33.048209    5043 cni.go:84] Creating CNI manager for ""
	I0719 16:43:33.048224    5043 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 16:43:33.048228    5043 start_flags.go:319] config:
	{Name:old-k8s-version-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-870000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:43:33.052812    5043 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:33.060106    5043 out.go:177] * Starting control plane node old-k8s-version-870000 in cluster old-k8s-version-870000
	I0719 16:43:33.064141    5043 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0719 16:43:33.064166    5043 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0719 16:43:33.064184    5043 cache.go:57] Caching tarball of preloaded images
	I0719 16:43:33.064251    5043 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:43:33.064256    5043 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0719 16:43:33.064317    5043 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/old-k8s-version-870000/config.json ...
	I0719 16:43:33.064331    5043 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/old-k8s-version-870000/config.json: {Name:mkc266aed6ae3dc4bd5f483ba22a2ccb0c14bb4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:43:33.064489    5043 start.go:365] acquiring machines lock for old-k8s-version-870000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:33.064518    5043 start.go:369] acquired machines lock for "old-k8s-version-870000" in 22.834µs
	I0719 16:43:33.064528    5043 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-870000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:43:33.064565    5043 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:43:33.073119    5043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:43:33.089294    5043 start.go:159] libmachine.API.Create for "old-k8s-version-870000" (driver="qemu2")
	I0719 16:43:33.089316    5043 client.go:168] LocalClient.Create starting
	I0719 16:43:33.089384    5043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:43:33.089407    5043 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:33.089419    5043 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:33.089466    5043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:43:33.089484    5043 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:33.089497    5043 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:33.089823    5043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:43:33.205100    5043 main.go:141] libmachine: Creating SSH key...
	I0719 16:43:33.317385    5043 main.go:141] libmachine: Creating Disk image...
	I0719 16:43:33.317391    5043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:43:33.317528    5043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2
	I0719 16:43:33.326032    5043 main.go:141] libmachine: STDOUT: 
	I0719 16:43:33.326049    5043 main.go:141] libmachine: STDERR: 
	I0719 16:43:33.326097    5043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2 +20000M
	I0719 16:43:33.333413    5043 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:43:33.333430    5043 main.go:141] libmachine: STDERR: 
	I0719 16:43:33.333449    5043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2
	I0719 16:43:33.333466    5043 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:43:33.333498    5043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:79:de:48:73:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2
	I0719 16:43:33.335001    5043 main.go:141] libmachine: STDOUT: 
	I0719 16:43:33.335014    5043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:33.335032    5043 client.go:171] LocalClient.Create took 245.710959ms
	I0719 16:43:35.337232    5043 start.go:128] duration metric: createHost completed in 2.27266025s
	I0719 16:43:35.337321    5043 start.go:83] releasing machines lock for "old-k8s-version-870000", held for 2.272815167s
	W0719 16:43:35.337445    5043 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:35.349151    5043 out.go:177] * Deleting "old-k8s-version-870000" in qemu2 ...
	W0719 16:43:35.369125    5043 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:35.369157    5043 start.go:687] Will try again in 5 seconds ...
	I0719 16:43:40.371331    5043 start.go:365] acquiring machines lock for old-k8s-version-870000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:40.371837    5043 start.go:369] acquired machines lock for "old-k8s-version-870000" in 380.792µs
	I0719 16:43:40.371941    5043 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-870000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:43:40.372326    5043 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:43:40.381976    5043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:43:40.430293    5043 start.go:159] libmachine.API.Create for "old-k8s-version-870000" (driver="qemu2")
	I0719 16:43:40.430338    5043 client.go:168] LocalClient.Create starting
	I0719 16:43:40.430525    5043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:43:40.430586    5043 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:40.430614    5043 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:40.430700    5043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:43:40.430730    5043 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:40.430743    5043 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:40.431307    5043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:43:40.557349    5043 main.go:141] libmachine: Creating SSH key...
	I0719 16:43:40.655814    5043 main.go:141] libmachine: Creating Disk image...
	I0719 16:43:40.655819    5043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:43:40.655960    5043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2
	I0719 16:43:40.664336    5043 main.go:141] libmachine: STDOUT: 
	I0719 16:43:40.664351    5043 main.go:141] libmachine: STDERR: 
	I0719 16:43:40.664414    5043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2 +20000M
	I0719 16:43:40.671464    5043 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:43:40.671478    5043 main.go:141] libmachine: STDERR: 
	I0719 16:43:40.671497    5043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2
	I0719 16:43:40.671503    5043 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:43:40.671547    5043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:95:ad:e1:aa:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2
	I0719 16:43:40.673044    5043 main.go:141] libmachine: STDOUT: 
	I0719 16:43:40.673059    5043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:40.673072    5043 client.go:171] LocalClient.Create took 242.722209ms
	I0719 16:43:42.675235    5043 start.go:128] duration metric: createHost completed in 2.302907542s
	I0719 16:43:42.675290    5043 start.go:83] releasing machines lock for "old-k8s-version-870000", held for 2.303453792s
	W0719 16:43:42.675736    5043 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-870000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:42.686502    5043 out.go:177] 
	W0719 16:43:42.690314    5043 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:43:42.690345    5043 out.go:239] * 
	* 
	W0719 16:43:42.692842    5043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:43:42.703445    5043 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-870000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000: exit status 7 (67.491083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.85s)
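
Every qemu2 start in this run dies at the same step: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so the VM never boots. Below is a minimal Go sketch of that connectivity check, a hypothetical pre-flight probe written for this report rather than minikube code, assuming only the socket path shown in the logs above.

// probe_socket_vmnet.go - hypothetical pre-flight probe (not minikube code).
// Dials the unix socket the qemu2 driver uses; "connection refused" here
// reproduces the failure mode seen throughout this run.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path used by the driver in these logs
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}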

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-870000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-870000 create -f testdata/busybox.yaml: exit status 1 (29.251958ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-870000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000: exit status 7 (28.76175ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-870000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000: exit status 7 (28.411917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-870000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-870000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-870000 describe deploy/metrics-server -n kube-system: exit status 1 (25.642ms)

** stderr ** 
	error: context "old-k8s-version-870000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-870000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000: exit status 7 (28.575083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
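
The expected string " fake.domain/registry.k8s.io/echoserver:1.4" in the assertion above is just the --registries override prefixed to the --images override for the same addon key. The sketch below reconstructs that composition from the flags passed in this test; the helper structure is invented for illustration, not minikube's addon code.

// addon_image.go - illustrative composition of the expected addon image,
// derived from --images=MetricsServer=... and --registries=MetricsServer=...
package main

import "fmt"

func main() {
	images := map[string]string{"MetricsServer": "registry.k8s.io/echoserver:1.4"}
	registries := map[string]string{"MetricsServer": "fake.domain"}

	for name, img := range images {
		if reg, ok := registries[name]; ok {
			img = reg + "/" + img // registry override is prefixed to the image
		}
		fmt.Println(name, "=>", img) // MetricsServer => fake.domain/registry.k8s.io/echoserver:1.4
	}
}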

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-870000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-870000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.182576041s)

-- stdout --
	* [old-k8s-version-870000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-870000 in cluster old-k8s-version-870000
	* Restarting existing qemu2 VM for "old-k8s-version-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-870000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:43:43.164185    5076 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:43:43.164299    5076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:43.164302    5076 out.go:309] Setting ErrFile to fd 2...
	I0719 16:43:43.164304    5076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:43.164405    5076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:43:43.165425    5076 out.go:303] Setting JSON to false
	I0719 16:43:43.180290    5076 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4394,"bootTime":1689805829,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:43:43.180344    5076 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:43:43.185315    5076 out.go:177] * [old-k8s-version-870000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:43:43.192222    5076 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:43:43.192284    5076 notify.go:220] Checking for updates...
	I0719 16:43:43.196078    5076 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:43:43.199242    5076 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:43:43.202257    5076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:43:43.205257    5076 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:43:43.208203    5076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:43:43.211532    5076 config.go:182] Loaded profile config "old-k8s-version-870000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0719 16:43:43.215259    5076 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0719 16:43:43.218167    5076 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:43:43.222162    5076 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 16:43:43.229231    5076 start.go:298] selected driver: qemu2
	I0719 16:43:43.229236    5076 start.go:880] validating driver "qemu2" against &{Name:old-k8s-version-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-870000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:43:43.229292    5076 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:43:43.231206    5076 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:43:43.231228    5076 cni.go:84] Creating CNI manager for ""
	I0719 16:43:43.231235    5076 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 16:43:43.231239    5076 start_flags.go:319] config:
	{Name:old-k8s-version-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-870000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:43:43.235210    5076 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:43.242000    5076 out.go:177] * Starting control plane node old-k8s-version-870000 in cluster old-k8s-version-870000
	I0719 16:43:43.246169    5076 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0719 16:43:43.246201    5076 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0719 16:43:43.246218    5076 cache.go:57] Caching tarball of preloaded images
	I0719 16:43:43.246291    5076 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:43:43.246296    5076 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0719 16:43:43.246364    5076 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/old-k8s-version-870000/config.json ...
	I0719 16:43:43.246742    5076 start.go:365] acquiring machines lock for old-k8s-version-870000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:43.246770    5076 start.go:369] acquired machines lock for "old-k8s-version-870000" in 21.5µs
	I0719 16:43:43.246780    5076 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:43:43.246785    5076 fix.go:54] fixHost starting: 
	I0719 16:43:43.246915    5076 fix.go:102] recreateIfNeeded on old-k8s-version-870000: state=Stopped err=<nil>
	W0719 16:43:43.246924    5076 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:43:43.254222    5076 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-870000" ...
	I0719 16:43:43.258313    5076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:95:ad:e1:aa:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2
	I0719 16:43:43.260154    5076 main.go:141] libmachine: STDOUT: 
	I0719 16:43:43.260172    5076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:43.260201    5076 fix.go:56] fixHost completed within 13.416666ms
	I0719 16:43:43.260207    5076 start.go:83] releasing machines lock for "old-k8s-version-870000", held for 13.433125ms
	W0719 16:43:43.260214    5076 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:43:43.260257    5076 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:43.260261    5076 start.go:687] Will try again in 5 seconds ...
	I0719 16:43:48.262398    5076 start.go:365] acquiring machines lock for old-k8s-version-870000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:48.262872    5076 start.go:369] acquired machines lock for "old-k8s-version-870000" in 380.792µs
	I0719 16:43:48.263018    5076 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:43:48.263041    5076 fix.go:54] fixHost starting: 
	I0719 16:43:48.263766    5076 fix.go:102] recreateIfNeeded on old-k8s-version-870000: state=Stopped err=<nil>
	W0719 16:43:48.263794    5076 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:43:48.267159    5076 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-870000" ...
	I0719 16:43:48.275382    5076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:95:ad:e1:aa:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2
	I0719 16:43:48.284779    5076 main.go:141] libmachine: STDOUT: 
	I0719 16:43:48.284832    5076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:48.284918    5076 fix.go:56] fixHost completed within 21.88075ms
	I0719 16:43:48.284934    5076 start.go:83] releasing machines lock for "old-k8s-version-870000", held for 22.039666ms
	W0719 16:43:48.285221    5076 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-870000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:48.293163    5076 out.go:177] 
	W0719 16:43:48.297215    5076 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:43:48.297262    5076 out.go:239] * 
	W0719 16:43:48.299625    5076 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:43:48.307160    5076 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-870000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000: exit status 7 (67.715958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
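
The stderr above shows the start path's single retry: fixHost fails, minikube logs "Will try again in 5 seconds", sleeps, and makes one more attempt before exiting with GUEST_PROVISION. The sketch below is a simplified reduction of that flow as observed in the log, with invented names (startHost); it is not minikube's actual start.go.

// retry_start.go - simplified illustration of the retry visible above.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the qemu2 driver start; in this run it always
// failed with the same socket_vmnet connection error.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}
}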

TestStoppedBinaryUpgrade/Upgrade (1.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.432079292.exe start -p stopped-upgrade-567000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.432079292.exe start -p stopped-upgrade-567000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.432079292.exe: permission denied (7.616667ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.432079292.exe start -p stopped-upgrade-567000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.432079292.exe start -p stopped-upgrade-567000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.432079292.exe: permission denied (5.214792ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.432079292.exe start -p stopped-upgrade-567000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.432079292.exe start -p stopped-upgrade-567000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.432079292.exe: permission denied (1.895917ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.432079292.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (1.46s)
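
The legacy v1.6.2 binary is downloaded to a temp file and then exec'd, and every attempt fails with "fork/exec ...: permission denied", the error fork/exec returns when the file lacks the execute bit. The sketch below shows the chmod-before-exec pattern that avoids it; whether a missing chmod is what happened in this run is an assumption, and the program is illustrative, not the test's code.

// exec_legacy.go - hypothetical illustration of the permission-denied
// failure above: a downloaded binary must be chmod'd executable before exec.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: exec_legacy <path-to-binary>")
		os.Exit(1)
	}
	path := os.Args[1] // e.g. the downloaded minikube-v1.6.2 binary

	// Without this, exec.Command fails with "fork/exec ...: permission denied".
	if err := os.Chmod(path, 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	out, err := exec.Command(path, "version").CombinedOutput()
	fmt.Printf("%s err=%v\n", out, err)
}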

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-870000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000: exit status 7 (32.974084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-870000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-870000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-870000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.705708ms)

** stderr ** 
	error: context "old-k8s-version-870000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-870000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000: exit status 7 (29.052833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-870000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-870000 "sudo crictl images -o json": exit status 89 (38.352583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-870000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-870000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-870000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000: exit status 7 (28.543083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
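
The "failed to decode images json invalid character '*'" message comes from feeding minikube's human-readable advice ("* The control plane node must be running ...") into a JSON decoder that expected `crictl images -o json` output. The snippet below reproduces that decode error with Go's encoding/json; the struct shape is an assumption about crictl's output made only for illustration.

// decode_images.go - reproduces the decode failure above: non-JSON CLI
// output trips json.Unmarshal with "invalid character '*' looking for
// beginning of value".
package main

import (
	"encoding/json"
	"fmt"
)

// imageList is an assumed shape for `crictl images -o json`, for illustration.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	raw := []byte("* The control plane node must be running for this command")
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		fmt.Println("failed to decode images json:", err)
	}
}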

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-870000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-870000 --alsologtostderr -v=1: exit status 89 (40.064083ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-870000"

-- /stdout --
** stderr ** 
	I0719 16:43:48.572306    5097 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:43:48.572679    5097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:48.572686    5097 out.go:309] Setting ErrFile to fd 2...
	I0719 16:43:48.572688    5097 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:48.572810    5097 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:43:48.573002    5097 out.go:303] Setting JSON to false
	I0719 16:43:48.573010    5097 mustload.go:65] Loading cluster: old-k8s-version-870000
	I0719 16:43:48.573160    5097 config.go:182] Loaded profile config "old-k8s-version-870000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0719 16:43:48.576656    5097 out.go:177] * The control plane node must be running for this command
	I0719 16:43:48.580607    5097 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-870000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-870000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000: exit status 7 (28.74425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-870000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000: exit status 7 (28.397417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-870000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
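
Exit status 89 here is the guard that `minikube pause` applies before doing any work: the stderr shows mustload.go loading the profile and bailing out with the "control plane node must be running" advice because the host is Stopped. Below is an illustrative reduction of that gate; the exit code is taken from the observed exit status, and none of the names are minikube's.

// pause_gate.go - illustrative sketch (not minikube code) of the running-host
// gate behind exit status 89 above.
package main

import (
	"fmt"
	"os"
)

const exitGuestUnavailable = 89 // matches the exit status seen in this log

func main() {
	state := "Stopped" // what `minikube status` reported for old-k8s-version-870000
	if state != "Running" {
		fmt.Println(`* The control plane node must be running for this command`)
		fmt.Println(`  To start a cluster, run: "minikube start -p old-k8s-version-870000"`)
		os.Exit(exitGuestUnavailable)
	}
}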

TestStoppedBinaryUpgrade/MinikubeLogs (0.08s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-567000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-567000: exit status 85 (80.5065ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-318000 sudo cat                              | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/docker/daemon.json                                |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo docker                           | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | system info                                            |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo                                  | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | systemctl status cri-docker                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo                                  | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | systemctl cat cri-docker                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo cat                              | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo cat                              | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo                                  | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo                                  | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo                                  | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo cat                              | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo cat                              | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo                                  | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo                                  | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo                                  | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo find                             | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p calico-318000 sudo crio                             | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p calico-318000                                       | calico-318000          | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT | 19 Jul 23 16:43 PDT |
	| start   | -p false-318000 --memory=3072                          | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --wait-timeout=15m --cni=false                         |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo cat                               | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/nsswitch.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo cat                               | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/hosts                                             |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo cat                               | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/resolv.conf                                       |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo crictl                            | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | pods                                                   |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo crictl ps                         | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | --all                                                  |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo find                              | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                           |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo ip a s                            | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	| ssh     | -p false-318000 sudo ip r s                            | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	| ssh     | -p false-318000 sudo                                   | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | iptables-save                                          |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo iptables                          | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | -t nat -L -n -v                                        |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo systemctl                         | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | status kubelet --all --full                            |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo systemctl                         | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | cat kubelet --no-pager                                 |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo                                   | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | journalctl -xeu kubelet --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo cat                               | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                           |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo cat                               | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                           |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo systemctl                         | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | status docker --all --full                             |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo systemctl                         | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | cat docker --no-pager                                  |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo cat                               | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/docker/daemon.json                                |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo docker                            | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | system info                                            |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo systemctl                         | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | status cri-docker --all --full                         |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo systemctl                         | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | cat cri-docker --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo cat                               | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo cat                               | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo                                   | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo systemctl                         | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | status containerd --all --full                         |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo systemctl                         | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | cat containerd --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo cat                               | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo cat                               | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo                                   | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo systemctl                         | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | status crio --all --full                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo systemctl                         | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | cat crio --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo find                              | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p false-318000 sudo crio                              | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p false-318000                                        | false-318000           | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT | 19 Jul 23 16:43 PDT |
	| start   | -p old-k8s-version-870000                              | old-k8s-version-870000 | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-870000        | old-k8s-version-870000 | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT | 19 Jul 23 16:43 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-870000                              | old-k8s-version-870000 | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT | 19 Jul 23 16:43 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-870000             | old-k8s-version-870000 | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT | 19 Jul 23 16:43 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-870000                              | old-k8s-version-870000 | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| ssh     | -p old-k8s-version-870000 sudo                         | old-k8s-version-870000 | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | crictl images -o json                                  |                        |         |         |                     |                     |
	| pause   | -p old-k8s-version-870000                              | old-k8s-version-870000 | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT |                     |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p old-k8s-version-870000                              | old-k8s-version-870000 | jenkins | v1.31.0 | 19 Jul 23 16:43 PDT | 19 Jul 23 16:43 PDT |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/19 16:43:43
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 16:43:43.164185    5076 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:43:43.164299    5076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:43.164302    5076 out.go:309] Setting ErrFile to fd 2...
	I0719 16:43:43.164304    5076 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:43.164405    5076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:43:43.165425    5076 out.go:303] Setting JSON to false
	I0719 16:43:43.180290    5076 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4394,"bootTime":1689805829,"procs":402,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:43:43.180344    5076 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:43:43.185315    5076 out.go:177] * [old-k8s-version-870000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:43:43.192222    5076 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:43:43.192284    5076 notify.go:220] Checking for updates...
	I0719 16:43:43.196078    5076 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:43:43.199242    5076 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:43:43.202257    5076 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:43:43.205257    5076 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:43:43.208203    5076 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:43:43.211532    5076 config.go:182] Loaded profile config "old-k8s-version-870000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0719 16:43:43.215259    5076 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0719 16:43:43.218167    5076 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:43:43.222162    5076 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 16:43:43.229231    5076 start.go:298] selected driver: qemu2
	I0719 16:43:43.229236    5076 start.go:880] validating driver "qemu2" against &{Name:old-k8s-version-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-870000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:43:43.229292    5076 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:43:43.231206    5076 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:43:43.231228    5076 cni.go:84] Creating CNI manager for ""
	I0719 16:43:43.231235    5076 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 16:43:43.231239    5076 start_flags.go:319] config:
	{Name:old-k8s-version-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-870000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:43:43.235210    5076 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:43.242000    5076 out.go:177] * Starting control plane node old-k8s-version-870000 in cluster old-k8s-version-870000
	I0719 16:43:43.246169    5076 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0719 16:43:43.246201    5076 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0719 16:43:43.246218    5076 cache.go:57] Caching tarball of preloaded images
	I0719 16:43:43.246291    5076 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:43:43.246296    5076 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0719 16:43:43.246364    5076 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/old-k8s-version-870000/config.json ...
	I0719 16:43:43.246742    5076 start.go:365] acquiring machines lock for old-k8s-version-870000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:43.246770    5076 start.go:369] acquired machines lock for "old-k8s-version-870000" in 21.5µs
	I0719 16:43:43.246780    5076 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:43:43.246785    5076 fix.go:54] fixHost starting: 
	I0719 16:43:43.246915    5076 fix.go:102] recreateIfNeeded on old-k8s-version-870000: state=Stopped err=<nil>
	W0719 16:43:43.246924    5076 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:43:43.254222    5076 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-870000" ...
	I0719 16:43:43.258313    5076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:95:ad:e1:aa:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2
	I0719 16:43:43.260154    5076 main.go:141] libmachine: STDOUT: 
	I0719 16:43:43.260172    5076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:43.260201    5076 fix.go:56] fixHost completed within 13.416666ms
	I0719 16:43:43.260207    5076 start.go:83] releasing machines lock for "old-k8s-version-870000", held for 13.433125ms
	W0719 16:43:43.260214    5076 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:43:43.260257    5076 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:43.260261    5076 start.go:687] Will try again in 5 seconds ...
	I0719 16:43:48.262398    5076 start.go:365] acquiring machines lock for old-k8s-version-870000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:48.262872    5076 start.go:369] acquired machines lock for "old-k8s-version-870000" in 380.792µs
	I0719 16:43:48.263018    5076 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:43:48.263041    5076 fix.go:54] fixHost starting: 
	I0719 16:43:48.263766    5076 fix.go:102] recreateIfNeeded on old-k8s-version-870000: state=Stopped err=<nil>
	W0719 16:43:48.263794    5076 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:43:48.267159    5076 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-870000" ...
	I0719 16:43:48.275382    5076 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:95:ad:e1:aa:f2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/old-k8s-version-870000/disk.qcow2
	I0719 16:43:48.284779    5076 main.go:141] libmachine: STDOUT: 
	I0719 16:43:48.284832    5076 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:48.284918    5076 fix.go:56] fixHost completed within 21.88075ms
	I0719 16:43:48.284934    5076 start.go:83] releasing machines lock for "old-k8s-version-870000", held for 22.039666ms
	W0719 16:43:48.285221    5076 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-870000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:48.293163    5076 out.go:177] 
	W0719 16:43:48.297215    5076 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:43:48.297262    5076 out.go:239] * 
	W0719 16:43:48.299625    5076 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:43:48.307160    5076 out.go:177] 
	
	* 
	* Profile "stopped-upgrade-567000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-567000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.08s)
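The failure above is minikube reporting that the profile no longer exists by the time `minikube logs` runs. A minimal reproduction sketch, assuming a local run of the same binary and using only the commands the message itself suggests (the profile name is taken from the output above):

	minikube profile list                      # list known profiles; stopped-upgrade-567000 should be absent
	minikube start -p stopped-upgrade-567000   # recreate the cluster, as the message suggests
	minikube logs -p stopped-upgrade-567000    # retry log collection once the profile exists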

TestStartStop/group/no-preload/serial/FirstStart (9.89s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-512000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-512000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.834499917s)

-- stdout --
	* [no-preload-512000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-512000 in cluster no-preload-512000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-512000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:43:49.043254    5133 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:43:49.043368    5133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:49.043371    5133 out.go:309] Setting ErrFile to fd 2...
	I0719 16:43:49.043374    5133 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:49.043486    5133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:43:49.044585    5133 out.go:303] Setting JSON to false
	I0719 16:43:49.061361    5133 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4400,"bootTime":1689805829,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:43:49.061441    5133 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:43:49.066554    5133 out.go:177] * [no-preload-512000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:43:49.073551    5133 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:43:49.077547    5133 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:43:49.073594    5133 notify.go:220] Checking for updates...
	I0719 16:43:49.081621    5133 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:43:49.084551    5133 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:43:49.091555    5133 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:43:49.099558    5133 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:43:49.103795    5133 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:43:49.103840    5133 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:43:49.106528    5133 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:43:49.112601    5133 start.go:298] selected driver: qemu2
	I0719 16:43:49.112606    5133 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:43:49.112613    5133 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:43:49.114507    5133 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:43:49.117605    5133 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:43:49.125611    5133 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:43:49.125628    5133 cni.go:84] Creating CNI manager for ""
	I0719 16:43:49.125635    5133 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:43:49.125638    5133 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:43:49.125649    5133 start_flags.go:319] config:
	{Name:no-preload-512000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-512000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:43:49.129913    5133 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:49.136503    5133 out.go:177] * Starting control plane node no-preload-512000 in cluster no-preload-512000
	I0719 16:43:49.140530    5133 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:43:49.140610    5133 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/no-preload-512000/config.json ...
	I0719 16:43:49.140633    5133 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/no-preload-512000/config.json: {Name:mkd3ed47edac384f051788c83d61b79d792798cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:43:49.140624    5133 cache.go:107] acquiring lock: {Name:mk7803e5d16883f92db6d35161b7ee419dcd1d5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:49.140633    5133 cache.go:107] acquiring lock: {Name:mk38d0e566cc3b727f6d35558a949eea62d6ec1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:49.140656    5133 cache.go:107] acquiring lock: {Name:mkf49768cca5dc6256d754c4880d40f09b47e2aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:49.140702    5133 cache.go:115] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0719 16:43:49.140709    5133 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 89.709µs
	I0719 16:43:49.140715    5133 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0719 16:43:49.140698    5133 cache.go:107] acquiring lock: {Name:mkc0ade9957db99f285c31c18790c183279515f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:49.140781    5133 cache.go:107] acquiring lock: {Name:mk015612985bb56b49178b035bc5604040e48455 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:49.141039    5133 cache.go:107] acquiring lock: {Name:mk5f32070a09acedc94fa2420b3a18e2d8ffdf02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:49.141073    5133 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0719 16:43:49.141079    5133 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0719 16:43:49.141183    5133 cache.go:107] acquiring lock: {Name:mkc728d21f41c06fb79278533384979c3caf45c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:49.141199    5133 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0719 16:43:49.141324    5133 start.go:365] acquiring machines lock for no-preload-512000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:49.141369    5133 start.go:369] acquired machines lock for "no-preload-512000" in 36.25µs
	I0719 16:43:49.141383    5133 start.go:93] Provisioning new machine with config: &{Name:no-preload-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-512000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:43:49.141443    5133 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:43:49.141466    5133 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0719 16:43:49.144627    5133 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:43:49.141329    5133 cache.go:107] acquiring lock: {Name:mka3f59e19c12312229668a277b6da2269c1bcb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:49.141626    5133 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0719 16:43:49.141900    5133 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0719 16:43:49.145298    5133 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0719 16:43:49.148395    5133 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0719 16:43:49.151262    5133 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0719 16:43:49.151611    5133 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0719 16:43:49.151626    5133 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0719 16:43:49.152370    5133 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0719 16:43:49.154335    5133 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0719 16:43:49.154391    5133 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0719 16:43:49.158994    5133 start.go:159] libmachine.API.Create for "no-preload-512000" (driver="qemu2")
	I0719 16:43:49.159013    5133 client.go:168] LocalClient.Create starting
	I0719 16:43:49.159076    5133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:43:49.159097    5133 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:49.159107    5133 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:49.159157    5133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:43:49.159173    5133 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:49.159179    5133 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:49.159516    5133 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:43:49.358585    5133 main.go:141] libmachine: Creating SSH key...
	I0719 16:43:49.441198    5133 main.go:141] libmachine: Creating Disk image...
	I0719 16:43:49.441212    5133 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:43:49.441424    5133 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2
	I0719 16:43:49.449913    5133 main.go:141] libmachine: STDOUT: 
	I0719 16:43:49.449926    5133 main.go:141] libmachine: STDERR: 
	I0719 16:43:49.449982    5133 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2 +20000M
	I0719 16:43:49.457907    5133 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:43:49.457925    5133 main.go:141] libmachine: STDERR: 
	I0719 16:43:49.457946    5133 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2
	I0719 16:43:49.457953    5133 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:43:49.457995    5133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:fb:06:94:0a:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2
	I0719 16:43:49.459698    5133 main.go:141] libmachine: STDOUT: 
	I0719 16:43:49.459709    5133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:49.459731    5133 client.go:171] LocalClient.Create took 300.717958ms
	I0719 16:43:50.239631    5133 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3
	I0719 16:43:50.288978    5133 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0719 16:43:50.393020    5133 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3
	I0719 16:43:50.520126    5133 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0719 16:43:50.590026    5133 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0719 16:43:50.627346    5133 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0719 16:43:50.627356    5133 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.486715458s
	I0719 16:43:50.627362    5133 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0719 16:43:50.778206    5133 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0719 16:43:50.986202    5133 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3
	I0719 16:43:51.460030    5133 start.go:128] duration metric: createHost completed in 2.318582167s
	I0719 16:43:51.460080    5133 start.go:83] releasing machines lock for "no-preload-512000", held for 2.31872775s
	W0719 16:43:51.460138    5133 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:51.477484    5133 out.go:177] * Deleting "no-preload-512000" in qemu2 ...
	W0719 16:43:51.493858    5133 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:51.494034    5133 start.go:687] Will try again in 5 seconds ...
	I0719 16:43:52.502450    5133 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0719 16:43:52.502506    5133 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3" took 3.36185775s
	I0719 16:43:52.502568    5133 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0719 16:43:53.045147    5133 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0719 16:43:53.045192    5133 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.903904s
	I0719 16:43:53.045247    5133 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0719 16:43:54.109361    5133 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0719 16:43:54.109407    5133 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3" took 4.968674375s
	I0719 16:43:54.109435    5133 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0719 16:43:54.712573    5133 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0719 16:43:54.712621    5133 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3" took 5.5720595s
	I0719 16:43:54.712660    5133 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0719 16:43:55.007677    5133 cache.go:157] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0719 16:43:55.007726    5133 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3" took 5.867045875s
	I0719 16:43:55.007766    5133 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0719 16:43:56.494308    5133 start.go:365] acquiring machines lock for no-preload-512000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:56.494854    5133 start.go:369] acquired machines lock for "no-preload-512000" in 450.5µs
	I0719 16:43:56.494993    5133 start.go:93] Provisioning new machine with config: &{Name:no-preload-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-512000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:43:56.495285    5133 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:43:56.504848    5133 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:43:56.551969    5133 start.go:159] libmachine.API.Create for "no-preload-512000" (driver="qemu2")
	I0719 16:43:56.552039    5133 client.go:168] LocalClient.Create starting
	I0719 16:43:56.552219    5133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:43:56.552292    5133 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:56.552315    5133 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:56.552403    5133 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:43:56.552432    5133 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:56.552452    5133 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:56.552934    5133 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:43:56.683729    5133 main.go:141] libmachine: Creating SSH key...
	I0719 16:43:56.785321    5133 main.go:141] libmachine: Creating Disk image...
	I0719 16:43:56.785327    5133 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:43:56.785465    5133 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2
	I0719 16:43:56.793937    5133 main.go:141] libmachine: STDOUT: 
	I0719 16:43:56.793954    5133 main.go:141] libmachine: STDERR: 
	I0719 16:43:56.794009    5133 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2 +20000M
	I0719 16:43:56.801302    5133 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:43:56.801315    5133 main.go:141] libmachine: STDERR: 
	I0719 16:43:56.801335    5133 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2
	I0719 16:43:56.801341    5133 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:43:56.801383    5133 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:cd:5a:cc:db:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2
	I0719 16:43:56.802862    5133 main.go:141] libmachine: STDOUT: 
	I0719 16:43:56.802892    5133 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:56.802905    5133 client.go:171] LocalClient.Create took 250.861584ms
	I0719 16:43:58.803295    5133 start.go:128] duration metric: createHost completed in 2.308008s
	I0719 16:43:58.803349    5133 start.go:83] releasing machines lock for "no-preload-512000", held for 2.3084945s
	W0719 16:43:58.803561    5133 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-512000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-512000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:58.824979    5133 out.go:177] 
	W0719 16:43:58.829146    5133 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:43:58.829176    5133 out.go:239] * 
	* 
	W0719 16:43:58.830789    5133 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:43:58.840978    5133 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-512000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000: exit status 7 (45.867583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.89s)
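
Every start failure in this section bottoms out at the same step: socket_vmnet_client cannot dial the unix socket at /var/run/socket_vmnet, so QEMU never gets a network backend and provisioning aborts with GUEST_PROVISION. A minimal spot-check on the build agent, assuming socket_vmnet is installed at the paths the log prints (these commands are illustrative and were not part of the test run):

	ls -l /var/run/socket_vmnet   # "Connection refused" on a unix socket usually means the file exists but nothing is listening on it
	pgrep -fl socket_vmnet        # is the socket_vmnet daemon itself still running on the agent?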

TestStartStop/group/embed-certs/serial/FirstStart (12.06s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-279000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-279000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (11.990982041s)

-- stdout --
	* [embed-certs-279000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-279000 in cluster embed-certs-279000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-279000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:43:49.149199    5142 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:43:49.149307    5142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:49.149310    5142 out.go:309] Setting ErrFile to fd 2...
	I0719 16:43:49.149313    5142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:49.149422    5142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:43:49.150692    5142 out.go:303] Setting JSON to false
	I0719 16:43:49.168175    5142 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4400,"bootTime":1689805829,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:43:49.168268    5142 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:43:49.174551    5142 out.go:177] * [embed-certs-279000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:43:49.182559    5142 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:43:49.178594    5142 notify.go:220] Checking for updates...
	I0719 16:43:49.190494    5142 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:43:49.197538    5142 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:43:49.205529    5142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:43:49.213573    5142 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:43:49.221522    5142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:43:49.225909    5142 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:43:49.225972    5142 config.go:182] Loaded profile config "no-preload-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:43:49.226019    5142 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:43:49.230906    5142 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:43:49.234560    5142 start.go:298] selected driver: qemu2
	I0719 16:43:49.234567    5142 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:43:49.234573    5142 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:43:49.236551    5142 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:43:49.241543    5142 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:43:49.245758    5142 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:43:49.245782    5142 cni.go:84] Creating CNI manager for ""
	I0719 16:43:49.245821    5142 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:43:49.245826    5142 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:43:49.245831    5142 start_flags.go:319] config:
	{Name:embed-certs-279000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-279000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:43:49.250166    5142 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:49.256575    5142 out.go:177] * Starting control plane node embed-certs-279000 in cluster embed-certs-279000
	I0719 16:43:49.264550    5142 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:43:49.264580    5142 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:43:49.264590    5142 cache.go:57] Caching tarball of preloaded images
	I0719 16:43:49.264647    5142 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:43:49.264652    5142 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:43:49.264715    5142 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/embed-certs-279000/config.json ...
	I0719 16:43:49.264729    5142 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/embed-certs-279000/config.json: {Name:mk26c0930d036567dac839205fa42d69aa9a59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:43:49.264974    5142 start.go:365] acquiring machines lock for embed-certs-279000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:51.460197    5142 start.go:369] acquired machines lock for "embed-certs-279000" in 2.195219917s
	I0719 16:43:51.460300    5142 start.go:93] Provisioning new machine with config: &{Name:embed-certs-279000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-279000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:43:51.460599    5142 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:43:51.470282    5142 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:43:51.517995    5142 start.go:159] libmachine.API.Create for "embed-certs-279000" (driver="qemu2")
	I0719 16:43:51.518043    5142 client.go:168] LocalClient.Create starting
	I0719 16:43:51.518165    5142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:43:51.518210    5142 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:51.518248    5142 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:51.518299    5142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:43:51.518327    5142 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:51.518343    5142 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:51.518934    5142 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:43:51.649964    5142 main.go:141] libmachine: Creating SSH key...
	I0719 16:43:51.688948    5142 main.go:141] libmachine: Creating Disk image...
	I0719 16:43:51.688960    5142 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:43:51.689112    5142 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2
	I0719 16:43:51.697742    5142 main.go:141] libmachine: STDOUT: 
	I0719 16:43:51.697758    5142 main.go:141] libmachine: STDERR: 
	I0719 16:43:51.697808    5142 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2 +20000M
	I0719 16:43:51.705098    5142 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:43:51.705111    5142 main.go:141] libmachine: STDERR: 
	I0719 16:43:51.705131    5142 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2
	I0719 16:43:51.705137    5142 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:43:51.705174    5142 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:3f:65:95:58:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2
	I0719 16:43:51.706687    5142 main.go:141] libmachine: STDOUT: 
	I0719 16:43:51.706699    5142 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:51.706715    5142 client.go:171] LocalClient.Create took 188.665542ms
	I0719 16:43:53.708863    5142 start.go:128] duration metric: createHost completed in 2.248212667s
	I0719 16:43:53.708955    5142 start.go:83] releasing machines lock for "embed-certs-279000", held for 2.24875075s
	W0719 16:43:53.709044    5142 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:53.715411    5142 out.go:177] * Deleting "embed-certs-279000" in qemu2 ...
	W0719 16:43:53.741068    5142 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:43:53.741095    5142 start.go:687] Will try again in 5 seconds ...
	I0719 16:43:58.741467    5142 start.go:365] acquiring machines lock for embed-certs-279000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:58.803450    5142 start.go:369] acquired machines lock for "embed-certs-279000" in 61.840208ms
	I0719 16:43:58.803626    5142 start.go:93] Provisioning new machine with config: &{Name:embed-certs-279000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-279000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:43:58.803886    5142 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:43:58.812980    5142 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:43:58.858772    5142 start.go:159] libmachine.API.Create for "embed-certs-279000" (driver="qemu2")
	I0719 16:43:58.858880    5142 client.go:168] LocalClient.Create starting
	I0719 16:43:58.859077    5142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:43:58.859133    5142 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:58.859164    5142 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:58.859268    5142 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:43:58.859303    5142 main.go:141] libmachine: Decoding PEM data...
	I0719 16:43:58.859321    5142 main.go:141] libmachine: Parsing certificate...
	I0719 16:43:58.859958    5142 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:43:59.004543    5142 main.go:141] libmachine: Creating SSH key...
	I0719 16:43:59.041188    5142 main.go:141] libmachine: Creating Disk image...
	I0719 16:43:59.041196    5142 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:43:59.041343    5142 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2
	I0719 16:43:59.049952    5142 main.go:141] libmachine: STDOUT: 
	I0719 16:43:59.049968    5142 main.go:141] libmachine: STDERR: 
	I0719 16:43:59.050038    5142 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2 +20000M
	I0719 16:43:59.057719    5142 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:43:59.057736    5142 main.go:141] libmachine: STDERR: 
	I0719 16:43:59.057748    5142 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2
	I0719 16:43:59.057756    5142 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:43:59.057807    5142 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:45:e8:62:a8:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2
	I0719 16:43:59.059410    5142 main.go:141] libmachine: STDOUT: 
	I0719 16:43:59.059423    5142 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:43:59.059445    5142 client.go:171] LocalClient.Create took 200.553625ms
	I0719 16:44:01.061706    5142 start.go:128] duration metric: createHost completed in 2.257789125s
	I0719 16:44:01.061785    5142 start.go:83] releasing machines lock for "embed-certs-279000", held for 2.258332875s
	W0719 16:44:01.062232    5142 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-279000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-279000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:01.075760    5142 out.go:177] 
	W0719 16:44:01.081745    5142 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:44:01.081808    5142 out.go:239] * 
	* 
	W0719 16:44:01.084691    5142 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:44:01.100661    5142 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-279000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000: exit status 7 (68.728958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (12.06s)

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-512000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-512000 create -f testdata/busybox.yaml: exit status 1 (30.652417ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-512000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000: exit status 7 (33.248083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-512000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000: exit status 7 (31.807125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
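
The "no openapi getter" error from kubectl appears to be fallout from the failed FirstStart above: the no-preload-512000 context exists in the kubeconfig but points at an API server that never came up, so create -f has nothing to validate against. A quick way to confirm the apiserver is unreachable (illustrative; not run by the suite):

	kubectl --context no-preload-512000 cluster-info   # expected to fail while the VM is stopped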

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-512000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-512000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-512000 describe deploy/metrics-server -n kube-system: exit status 1 (26.044834ms)

** stderr ** 
	error: context "no-preload-512000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-512000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000: exit status 7 (28.313833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (6.96s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-512000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-512000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (6.886559833s)

-- stdout --
	* [no-preload-512000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-512000 in cluster no-preload-512000
	* Restarting existing qemu2 VM for "no-preload-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-512000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:43:59.290372    5283 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:43:59.290504    5283 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:59.290507    5283 out.go:309] Setting ErrFile to fd 2...
	I0719 16:43:59.290509    5283 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:43:59.290638    5283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:43:59.291606    5283 out.go:303] Setting JSON to false
	I0719 16:43:59.306731    5283 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4410,"bootTime":1689805829,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:43:59.306791    5283 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:43:59.310456    5283 out.go:177] * [no-preload-512000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:43:59.317530    5283 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:43:59.320489    5283 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:43:59.317582    5283 notify.go:220] Checking for updates...
	I0719 16:43:59.324553    5283 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:43:59.327512    5283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:43:59.330515    5283 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:43:59.333460    5283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:43:59.336796    5283 config.go:182] Loaded profile config "no-preload-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:43:59.337047    5283 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:43:59.341450    5283 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 16:43:59.348490    5283 start.go:298] selected driver: qemu2
	I0719 16:43:59.348495    5283 start.go:880] validating driver "qemu2" against &{Name:no-preload-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-512000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:43:59.348559    5283 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:43:59.350499    5283 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:43:59.350523    5283 cni.go:84] Creating CNI manager for ""
	I0719 16:43:59.350529    5283 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:43:59.350535    5283 start_flags.go:319] config:
	{Name:no-preload-512000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-512000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:43:59.354446    5283 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:59.361497    5283 out.go:177] * Starting control plane node no-preload-512000 in cluster no-preload-512000
	I0719 16:43:59.365483    5283 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:43:59.365594    5283 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/no-preload-512000/config.json ...
	I0719 16:43:59.365644    5283 cache.go:107] acquiring lock: {Name:mk7803e5d16883f92db6d35161b7ee419dcd1d5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:59.365657    5283 cache.go:107] acquiring lock: {Name:mk38d0e566cc3b727f6d35558a949eea62d6ec1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:59.365686    5283 cache.go:107] acquiring lock: {Name:mkc728d21f41c06fb79278533384979c3caf45c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:59.365708    5283 cache.go:115] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0719 16:43:59.365738    5283 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 94.333µs
	I0719 16:43:59.365744    5283 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0719 16:43:59.365749    5283 cache.go:107] acquiring lock: {Name:mk5f32070a09acedc94fa2420b3a18e2d8ffdf02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:59.365766    5283 cache.go:107] acquiring lock: {Name:mka3f59e19c12312229668a277b6da2269c1bcb5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:59.365777    5283 cache.go:115] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0719 16:43:59.365783    5283 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3" took 136.333µs
	I0719 16:43:59.365787    5283 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0719 16:43:59.365820    5283 cache.go:107] acquiring lock: {Name:mk015612985bb56b49178b035bc5604040e48455 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:59.365805    5283 cache.go:115] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0719 16:43:59.365828    5283 cache.go:115] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0719 16:43:59.365874    5283 cache.go:107] acquiring lock: {Name:mkf49768cca5dc6256d754c4880d40f09b47e2aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:59.365891    5283 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3" took 211.583µs
	I0719 16:43:59.365890    5283 cache.go:115] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0719 16:43:59.365898    5283 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0719 16:43:59.365894    5283 cache.go:107] acquiring lock: {Name:mkc0ade9957db99f285c31c18790c183279515f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:43:59.365901    5283 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3" took 115.666µs
	I0719 16:43:59.365911    5283 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0719 16:43:59.365909    5283 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0719 16:43:59.365940    5283 start.go:365] acquiring machines lock for no-preload-512000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:43:59.365922    5283 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 124.958µs
	I0719 16:43:59.365962    5283 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0719 16:43:59.365978    5283 cache.go:115] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0719 16:43:59.365952    5283 cache.go:115] /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0719 16:43:59.365983    5283 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 188.042µs
	I0719 16:43:59.365992    5283 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0719 16:43:59.365989    5283 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3" took 124.541µs
	I0719 16:43:59.365998    5283 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0719 16:43:59.370294    5283 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0719 16:44:00.405846    5283 cache.go:162] opening:  /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0719 16:44:01.061937    5283 start.go:369] acquired machines lock for "no-preload-512000" in 1.695981166s
	I0719 16:44:01.062109    5283 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:44:01.062145    5283 fix.go:54] fixHost starting: 
	I0719 16:44:01.062813    5283 fix.go:102] recreateIfNeeded on no-preload-512000: state=Stopped err=<nil>
	W0719 16:44:01.062854    5283 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:44:01.075737    5283 out.go:177] * Restarting existing qemu2 VM for "no-preload-512000" ...
	I0719 16:44:01.077707    5283 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:cd:5a:cc:db:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2
	I0719 16:44:01.088125    5283 main.go:141] libmachine: STDOUT: 
	I0719 16:44:01.088230    5283 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:01.088324    5283 fix.go:56] fixHost completed within 26.19525ms
	I0719 16:44:01.088341    5283 start.go:83] releasing machines lock for "no-preload-512000", held for 26.367166ms
	W0719 16:44:01.088385    5283 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:44:01.088536    5283 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:01.088554    5283 start.go:687] Will try again in 5 seconds ...
	I0719 16:44:06.088774    5283 start.go:365] acquiring machines lock for no-preload-512000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:44:06.089263    5283 start.go:369] acquired machines lock for "no-preload-512000" in 395.75µs
	I0719 16:44:06.089413    5283 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:44:06.089435    5283 fix.go:54] fixHost starting: 
	I0719 16:44:06.090203    5283 fix.go:102] recreateIfNeeded on no-preload-512000: state=Stopped err=<nil>
	W0719 16:44:06.090229    5283 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:44:06.094957    5283 out.go:177] * Restarting existing qemu2 VM for "no-preload-512000" ...
	I0719 16:44:06.102964    5283 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:cd:5a:cc:db:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/no-preload-512000/disk.qcow2
	I0719 16:44:06.112636    5283 main.go:141] libmachine: STDOUT: 
	I0719 16:44:06.112694    5283 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:06.112776    5283 fix.go:56] fixHost completed within 23.34375ms
	I0719 16:44:06.112801    5283 start.go:83] releasing machines lock for "no-preload-512000", held for 23.513959ms
	W0719 16:44:06.113007    5283 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-512000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-512000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:06.120823    5283 out.go:177] 
	W0719 16:44:06.124848    5283 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:44:06.124882    5283 out.go:239] * 
	* 
	W0719 16:44:06.127785    5283 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:44:06.138874    5283 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-512000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000: exit status 7 (69.979917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (6.96s)
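
Unlike FirstStart, this SecondStart takes the fix-host path ("Skipping create...Using existing machine configuration") and only tries to reboot the existing disk image, but the restart dies on the same socket_vmnet dial. If triaging interactively, the cleanup step the log itself recommends would be the natural starting point (illustrative; not run by the suite):

	out/minikube-darwin-arm64 delete -p no-preload-512000   # the "may fix it" advice printed in the failure above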

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-279000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-279000 create -f testdata/busybox.yaml: exit status 1 (30.146791ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-279000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000: exit status 7 (28.296667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-279000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000: exit status 7 (28.854708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-279000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-279000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-279000 describe deploy/metrics-server -n kube-system: exit status 1 (26.18525ms)

** stderr ** 
	error: context "embed-certs-279000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-279000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000: exit status 7 (28.325167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.22s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-279000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-279000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.191700167s)

-- stdout --
	* [embed-certs-279000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-279000 in cluster embed-certs-279000
	* Restarting existing qemu2 VM for "embed-certs-279000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-279000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0719 16:44:01.558455    5322 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:44:01.558561    5322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:01.558563    5322 out.go:309] Setting ErrFile to fd 2...
	I0719 16:44:01.558566    5322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:01.558675    5322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:44:01.559672    5322 out.go:303] Setting JSON to false
	I0719 16:44:01.574704    5322 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4412,"bootTime":1689805829,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:44:01.574786    5322 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:44:01.579568    5322 out.go:177] * [embed-certs-279000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:44:01.589644    5322 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:44:01.586761    5322 notify.go:220] Checking for updates...
	I0719 16:44:01.597710    5322 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:44:01.605725    5322 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:44:01.613698    5322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:44:01.621742    5322 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:44:01.629717    5322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:44:01.633992    5322 config.go:182] Loaded profile config "embed-certs-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:44:01.634235    5322 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:44:01.638678    5322 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 16:44:01.645531    5322 start.go:298] selected driver: qemu2
	I0719 16:44:01.645538    5322 start.go:880] validating driver "qemu2" against &{Name:embed-certs-279000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-279000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:44:01.645600    5322 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:44:01.647666    5322 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:44:01.647689    5322 cni.go:84] Creating CNI manager for ""
	I0719 16:44:01.647696    5322 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:44:01.647703    5322 start_flags.go:319] config:
	{Name:embed-certs-279000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-279000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:44:01.652047    5322 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:44:01.659771    5322 out.go:177] * Starting control plane node embed-certs-279000 in cluster embed-certs-279000
	I0719 16:44:01.663750    5322 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:44:01.663787    5322 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:44:01.663802    5322 cache.go:57] Caching tarball of preloaded images
	I0719 16:44:01.663861    5322 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:44:01.663869    5322 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:44:01.663926    5322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/embed-certs-279000/config.json ...
	I0719 16:44:01.664170    5322 start.go:365] acquiring machines lock for embed-certs-279000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:44:01.664399    5322 start.go:369] acquired machines lock for "embed-certs-279000" in 222.5µs
	I0719 16:44:01.664411    5322 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:44:01.664416    5322 fix.go:54] fixHost starting: 
	I0719 16:44:01.664567    5322 fix.go:102] recreateIfNeeded on embed-certs-279000: state=Stopped err=<nil>
	W0719 16:44:01.664577    5322 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:44:01.668772    5322 out.go:177] * Restarting existing qemu2 VM for "embed-certs-279000" ...
	I0719 16:44:01.676784    5322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:45:e8:62:a8:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2
	I0719 16:44:01.678706    5322 main.go:141] libmachine: STDOUT: 
	I0719 16:44:01.678721    5322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:01.678751    5322 fix.go:56] fixHost completed within 14.335542ms
	I0719 16:44:01.678756    5322 start.go:83] releasing machines lock for "embed-certs-279000", held for 14.350458ms
	W0719 16:44:01.678765    5322 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:44:01.678812    5322 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:01.678816    5322 start.go:687] Will try again in 5 seconds ...
	I0719 16:44:06.680831    5322 start.go:365] acquiring machines lock for embed-certs-279000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:44:06.680921    5322 start.go:369] acquired machines lock for "embed-certs-279000" in 70µs
	I0719 16:44:06.680939    5322 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:44:06.680943    5322 fix.go:54] fixHost starting: 
	I0719 16:44:06.681068    5322 fix.go:102] recreateIfNeeded on embed-certs-279000: state=Stopped err=<nil>
	W0719 16:44:06.681072    5322 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:44:06.685274    5322 out.go:177] * Restarting existing qemu2 VM for "embed-certs-279000" ...
	I0719 16:44:06.693290    5322 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:45:e8:62:a8:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/embed-certs-279000/disk.qcow2
	I0719 16:44:06.695281    5322 main.go:141] libmachine: STDOUT: 
	I0719 16:44:06.695298    5322 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:06.695319    5322 fix.go:56] fixHost completed within 14.376667ms
	I0719 16:44:06.695325    5322 start.go:83] releasing machines lock for "embed-certs-279000", held for 14.398291ms
	W0719 16:44:06.695394    5322 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-279000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-279000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:06.702337    5322 out.go:177] 
	W0719 16:44:06.705311    5322 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:44:06.705320    5322 out.go:239] * 
	* 
	W0719 16:44:06.705796    5322 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:44:06.716071    5322 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-279000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000: exit status 7 (30.3065ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.22s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-512000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000: exit status 7 (31.799584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-512000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-512000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-512000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.731ms)

** stderr ** 
	error: context "no-preload-512000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-512000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000: exit status 7 (28.046834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-512000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-512000 "sudo crictl images -o json": exit status 89 (38.499916ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-512000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-512000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-512000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000: exit status 7 (27.773959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-512000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-512000 --alsologtostderr -v=1: exit status 89 (39.557667ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-512000"

-- /stdout --
** stderr ** 
	I0719 16:44:06.402405    5341 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:44:06.402529    5341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:06.402533    5341 out.go:309] Setting ErrFile to fd 2...
	I0719 16:44:06.402536    5341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:06.402649    5341 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:44:06.402842    5341 out.go:303] Setting JSON to false
	I0719 16:44:06.402851    5341 mustload.go:65] Loading cluster: no-preload-512000
	I0719 16:44:06.403019    5341 config.go:182] Loaded profile config "no-preload-512000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:44:06.407275    5341 out.go:177] * The control plane node must be running for this command
	I0719 16:44:06.411401    5341 out.go:177]   To start a cluster, run: "minikube start -p no-preload-512000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-512000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000: exit status 7 (27.959708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-512000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000: exit status 7 (27.759167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-512000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-279000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000: exit status 7 (29.861916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-279000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-279000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-279000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.654125ms)

** stderr ** 
	error: context "embed-certs-279000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-279000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000: exit status 7 (29.743ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-279000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-279000 "sudo crictl images -o json": exit status 89 (38.764792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-279000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-279000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-279000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000: exit status 7 (30.66375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-279000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-279000 --alsologtostderr -v=1: exit status 89 (38.83ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-279000"

-- /stdout --
** stderr ** 
	I0719 16:44:06.940070    5380 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:44:06.940189    5380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:06.940194    5380 out.go:309] Setting ErrFile to fd 2...
	I0719 16:44:06.940196    5380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:06.940309    5380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:44:06.940511    5380 out.go:303] Setting JSON to false
	I0719 16:44:06.940522    5380 mustload.go:65] Loading cluster: embed-certs-279000
	I0719 16:44:06.940705    5380 config.go:182] Loaded profile config "embed-certs-279000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:44:06.944338    5380 out.go:177] * The control plane node must be running for this command
	I0719 16:44:06.947387    5380 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-279000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-279000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000: exit status 7 (29.656416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-279000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000: exit status 7 (29.232667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-279000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.79s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-111000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-111000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.72334175s)

-- stdout --
	* [default-k8s-diff-port-111000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-111000 in cluster default-k8s-diff-port-111000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-111000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0719 16:44:07.138700    5399 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:44:07.138824    5399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:07.138827    5399 out.go:309] Setting ErrFile to fd 2...
	I0719 16:44:07.138829    5399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:07.138941    5399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:44:07.140021    5399 out.go:303] Setting JSON to false
	I0719 16:44:07.156860    5399 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4418,"bootTime":1689805829,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:44:07.156933    5399 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:44:07.161218    5399 out.go:177] * [default-k8s-diff-port-111000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:44:07.171284    5399 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:44:07.167359    5399 notify.go:220] Checking for updates...
	I0719 16:44:07.177276    5399 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:44:07.180219    5399 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:44:07.183261    5399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:44:07.186268    5399 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:44:07.189266    5399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:44:07.192616    5399 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:44:07.192659    5399 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:44:07.196265    5399 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:44:07.203223    5399 start.go:298] selected driver: qemu2
	I0719 16:44:07.203229    5399 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:44:07.203235    5399 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:44:07.205009    5399 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 16:44:07.209265    5399 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:44:07.212363    5399 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:44:07.212390    5399 cni.go:84] Creating CNI manager for ""
	I0719 16:44:07.212396    5399 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:44:07.212399    5399 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:44:07.212405    5399 start_flags.go:319] config:
	{Name:default-k8s-diff-port-111000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-111000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:44:07.216543    5399 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:44:07.227258    5399 out.go:177] * Starting control plane node default-k8s-diff-port-111000 in cluster default-k8s-diff-port-111000
	I0719 16:44:07.231238    5399 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:44:07.231264    5399 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:44:07.231278    5399 cache.go:57] Caching tarball of preloaded images
	I0719 16:44:07.231350    5399 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:44:07.231356    5399 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:44:07.231412    5399 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/default-k8s-diff-port-111000/config.json ...
	I0719 16:44:07.231424    5399 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/default-k8s-diff-port-111000/config.json: {Name:mkdd4f0ae5a8f363bb9f10e49b93cd7958d94b20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:44:07.231643    5399 start.go:365] acquiring machines lock for default-k8s-diff-port-111000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:44:07.231676    5399 start.go:369] acquired machines lock for "default-k8s-diff-port-111000" in 21µs
	I0719 16:44:07.231687    5399 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-111000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-111000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:44:07.231724    5399 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:44:07.235251    5399 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:44:07.249462    5399 start.go:159] libmachine.API.Create for "default-k8s-diff-port-111000" (driver="qemu2")
	I0719 16:44:07.249484    5399 client.go:168] LocalClient.Create starting
	I0719 16:44:07.249567    5399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:44:07.249589    5399 main.go:141] libmachine: Decoding PEM data...
	I0719 16:44:07.249598    5399 main.go:141] libmachine: Parsing certificate...
	I0719 16:44:07.249631    5399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:44:07.249646    5399 main.go:141] libmachine: Decoding PEM data...
	I0719 16:44:07.249653    5399 main.go:141] libmachine: Parsing certificate...
	I0719 16:44:07.249951    5399 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:44:07.370831    5399 main.go:141] libmachine: Creating SSH key...
	I0719 16:44:07.460643    5399 main.go:141] libmachine: Creating Disk image...
	I0719 16:44:07.460653    5399 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:44:07.460812    5399 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2
	I0719 16:44:07.482131    5399 main.go:141] libmachine: STDOUT: 
	I0719 16:44:07.482141    5399 main.go:141] libmachine: STDERR: 
	I0719 16:44:07.482196    5399 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2 +20000M
	I0719 16:44:07.495712    5399 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:44:07.495736    5399 main.go:141] libmachine: STDERR: 
	I0719 16:44:07.495757    5399 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2
	I0719 16:44:07.495770    5399 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:44:07.495813    5399 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:2c:5b:5c:87:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2
	I0719 16:44:07.497381    5399 main.go:141] libmachine: STDOUT: 
	I0719 16:44:07.497393    5399 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:07.497411    5399 client.go:171] LocalClient.Create took 247.926375ms
	I0719 16:44:09.499578    5399 start.go:128] duration metric: createHost completed in 2.267849375s
	I0719 16:44:09.499671    5399 start.go:83] releasing machines lock for "default-k8s-diff-port-111000", held for 2.26800775s
	W0719 16:44:09.499801    5399 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:09.511057    5399 out.go:177] * Deleting "default-k8s-diff-port-111000" in qemu2 ...
	W0719 16:44:09.531711    5399 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:09.531738    5399 start.go:687] Will try again in 5 seconds ...
	I0719 16:44:14.532052    5399 start.go:365] acquiring machines lock for default-k8s-diff-port-111000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:44:14.532670    5399 start.go:369] acquired machines lock for "default-k8s-diff-port-111000" in 472.75µs
	I0719 16:44:14.532814    5399 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-111000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-111000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:44:14.533095    5399 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:44:14.538891    5399 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:44:14.586350    5399 start.go:159] libmachine.API.Create for "default-k8s-diff-port-111000" (driver="qemu2")
	I0719 16:44:14.586389    5399 client.go:168] LocalClient.Create starting
	I0719 16:44:14.586524    5399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:44:14.586576    5399 main.go:141] libmachine: Decoding PEM data...
	I0719 16:44:14.586597    5399 main.go:141] libmachine: Parsing certificate...
	I0719 16:44:14.586687    5399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:44:14.586715    5399 main.go:141] libmachine: Decoding PEM data...
	I0719 16:44:14.586732    5399 main.go:141] libmachine: Parsing certificate...
	I0719 16:44:14.587248    5399 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:44:14.717245    5399 main.go:141] libmachine: Creating SSH key...
	I0719 16:44:14.774420    5399 main.go:141] libmachine: Creating Disk image...
	I0719 16:44:14.774425    5399 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:44:14.774574    5399 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2
	I0719 16:44:14.783019    5399 main.go:141] libmachine: STDOUT: 
	I0719 16:44:14.783032    5399 main.go:141] libmachine: STDERR: 
	I0719 16:44:14.783089    5399 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2 +20000M
	I0719 16:44:14.790147    5399 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:44:14.790158    5399 main.go:141] libmachine: STDERR: 
	I0719 16:44:14.790172    5399 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2
	I0719 16:44:14.790178    5399 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:44:14.790222    5399 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:cb:f1:fa:b9:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2
	I0719 16:44:14.791711    5399 main.go:141] libmachine: STDOUT: 
	I0719 16:44:14.791723    5399 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:14.791735    5399 client.go:171] LocalClient.Create took 205.34325ms
	I0719 16:44:16.793866    5399 start.go:128] duration metric: createHost completed in 2.260758209s
	I0719 16:44:16.793933    5399 start.go:83] releasing machines lock for "default-k8s-diff-port-111000", held for 2.261258542s
	W0719 16:44:16.794324    5399 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-111000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-111000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:16.804819    5399 out.go:177] 
	W0719 16:44:16.808901    5399 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:44:16.808925    5399 out.go:239] * 
	* 
	W0719 16:44:16.811445    5399 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:44:16.819854    5399 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-111000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000: exit status 7 (66.88625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-111000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.79s)
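Every start attempt above dies at the same step: the qemu2 driver launches the VM through socket_vmnet_client, and the client cannot reach the daemon's UNIX socket at /var/run/socket_vmnet. A minimal triage sketch for the affected host, assuming the Homebrew socket_vmnet layout shown in the log (the gateway address below is an illustrative assumption, not taken from this report):

	# Does the socket exist, and is a socket_vmnet daemon actually running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# If nothing is listening, starting the daemon by hand would look like:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet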

TestStartStop/group/newest-cni/serial/FirstStart (12.08s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-135000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-135000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (12.018917541s)

-- stdout --
	* [newest-cni-135000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-135000 in cluster newest-cni-135000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-135000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:44:07.444206    5418 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:44:07.444331    5418 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:07.444334    5418 out.go:309] Setting ErrFile to fd 2...
	I0719 16:44:07.444340    5418 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:07.444447    5418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:44:07.445432    5418 out.go:303] Setting JSON to false
	I0719 16:44:07.460805    5418 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4418,"bootTime":1689805829,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:44:07.460868    5418 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:44:07.465246    5418 out.go:177] * [newest-cni-135000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:44:07.481398    5418 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:44:07.473388    5418 notify.go:220] Checking for updates...
	I0719 16:44:07.489283    5418 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:44:07.499209    5418 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:44:07.502233    5418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:44:07.505311    5418 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:44:07.508283    5418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:44:07.511563    5418 config.go:182] Loaded profile config "default-k8s-diff-port-111000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:44:07.511623    5418 config.go:182] Loaded profile config "multinode-992000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:44:07.511669    5418 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:44:07.516267    5418 out.go:177] * Using the qemu2 driver based on user configuration
	I0719 16:44:07.523266    5418 start.go:298] selected driver: qemu2
	I0719 16:44:07.523270    5418 start.go:880] validating driver "qemu2" against <nil>
	I0719 16:44:07.523275    5418 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:44:07.525928    5418 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0719 16:44:07.525951    5418 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0719 16:44:07.534233    5418 out.go:177] * Automatically selected the socket_vmnet network
	I0719 16:44:07.537346    5418 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 16:44:07.537364    5418 cni.go:84] Creating CNI manager for ""
	I0719 16:44:07.537369    5418 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:44:07.537372    5418 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0719 16:44:07.537377    5418 start_flags.go:319] config:
	{Name:newest-cni-135000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-135000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:44:07.541518    5418 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:44:07.544314    5418 out.go:177] * Starting control plane node newest-cni-135000 in cluster newest-cni-135000
	I0719 16:44:07.552282    5418 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:44:07.552308    5418 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:44:07.552319    5418 cache.go:57] Caching tarball of preloaded images
	I0719 16:44:07.552371    5418 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:44:07.552376    5418 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:44:07.552456    5418 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/newest-cni-135000/config.json ...
	I0719 16:44:07.552472    5418 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/newest-cni-135000/config.json: {Name:mkcd9205690017145622829ed8aaca26205d9ff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 16:44:07.552662    5418 start.go:365] acquiring machines lock for newest-cni-135000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:44:09.499834    5418 start.go:369] acquired machines lock for "newest-cni-135000" in 1.94715775s
	I0719 16:44:09.500004    5418 start.go:93] Provisioning new machine with config: &{Name:newest-cni-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-135000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:44:09.500316    5418 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:44:09.506166    5418 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:44:09.551960    5418 start.go:159] libmachine.API.Create for "newest-cni-135000" (driver="qemu2")
	I0719 16:44:09.552008    5418 client.go:168] LocalClient.Create starting
	I0719 16:44:09.552177    5418 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:44:09.552227    5418 main.go:141] libmachine: Decoding PEM data...
	I0719 16:44:09.552255    5418 main.go:141] libmachine: Parsing certificate...
	I0719 16:44:09.552349    5418 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:44:09.552386    5418 main.go:141] libmachine: Decoding PEM data...
	I0719 16:44:09.552407    5418 main.go:141] libmachine: Parsing certificate...
	I0719 16:44:09.553087    5418 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:44:09.689576    5418 main.go:141] libmachine: Creating SSH key...
	I0719 16:44:09.865030    5418 main.go:141] libmachine: Creating Disk image...
	I0719 16:44:09.865037    5418 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:44:09.865211    5418 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2
	I0719 16:44:09.874256    5418 main.go:141] libmachine: STDOUT: 
	I0719 16:44:09.874270    5418 main.go:141] libmachine: STDERR: 
	I0719 16:44:09.874342    5418 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2 +20000M
	I0719 16:44:09.881464    5418 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:44:09.881480    5418 main.go:141] libmachine: STDERR: 
	I0719 16:44:09.881505    5418 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2
	I0719 16:44:09.881511    5418 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:44:09.881579    5418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:07:cc:85:b4:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2
	I0719 16:44:09.883124    5418 main.go:141] libmachine: STDOUT: 
	I0719 16:44:09.883144    5418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:09.883158    5418 client.go:171] LocalClient.Create took 331.142708ms
	I0719 16:44:11.885321    5418 start.go:128] duration metric: createHost completed in 2.384943333s
	I0719 16:44:11.885373    5418 start.go:83] releasing machines lock for "newest-cni-135000", held for 2.38552875s
	W0719 16:44:11.885473    5418 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:11.892775    5418 out.go:177] * Deleting "newest-cni-135000" in qemu2 ...
	W0719 16:44:11.918578    5418 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:11.918604    5418 start.go:687] Will try again in 5 seconds ...
	I0719 16:44:16.920641    5418 start.go:365] acquiring machines lock for newest-cni-135000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:44:16.920732    5418 start.go:369] acquired machines lock for "newest-cni-135000" in 65.291µs
	I0719 16:44:16.920763    5418 start.go:93] Provisioning new machine with config: &{Name:newest-cni-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-135000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0719 16:44:16.920801    5418 start.go:125] createHost starting for "" (driver="qemu2")
	I0719 16:44:16.928528    5418 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0719 16:44:16.942783    5418 start.go:159] libmachine.API.Create for "newest-cni-135000" (driver="qemu2")
	I0719 16:44:16.942806    5418 client.go:168] LocalClient.Create starting
	I0719 16:44:16.942866    5418 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/ca.pem
	I0719 16:44:16.942890    5418 main.go:141] libmachine: Decoding PEM data...
	I0719 16:44:16.942900    5418 main.go:141] libmachine: Parsing certificate...
	I0719 16:44:16.942941    5418 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15585-1056/.minikube/certs/cert.pem
	I0719 16:44:16.942957    5418 main.go:141] libmachine: Decoding PEM data...
	I0719 16:44:16.942967    5418 main.go:141] libmachine: Parsing certificate...
	I0719 16:44:16.943237    5418 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso...
	I0719 16:44:17.105847    5418 main.go:141] libmachine: Creating SSH key...
	I0719 16:44:17.346133    5418 main.go:141] libmachine: Creating Disk image...
	I0719 16:44:17.346141    5418 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0719 16:44:17.348629    5418 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2.raw /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2
	I0719 16:44:17.362556    5418 main.go:141] libmachine: STDOUT: 
	I0719 16:44:17.362570    5418 main.go:141] libmachine: STDERR: 
	I0719 16:44:17.362631    5418 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2 +20000M
	I0719 16:44:17.378451    5418 main.go:141] libmachine: STDOUT: Image resized.
	
	I0719 16:44:17.378465    5418 main.go:141] libmachine: STDERR: 
	I0719 16:44:17.378481    5418 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2
	I0719 16:44:17.378492    5418 main.go:141] libmachine: Starting QEMU VM...
	I0719 16:44:17.378539    5418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c3:3c:f7:e8:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2
	I0719 16:44:17.380087    5418 main.go:141] libmachine: STDOUT: 
	I0719 16:44:17.380105    5418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:17.380120    5418 client.go:171] LocalClient.Create took 437.315ms
	I0719 16:44:19.382289    5418 start.go:128] duration metric: createHost completed in 2.461489542s
	I0719 16:44:19.382355    5418 start.go:83] releasing machines lock for "newest-cni-135000", held for 2.461639583s
	W0719 16:44:19.382785    5418 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-135000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-135000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:19.391377    5418 out.go:177] 
	W0719 16:44:19.403419    5418 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:44:19.403441    5418 out.go:239] * 
	* 
	W0719 16:44:19.406448    5418 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:44:19.418318    5418 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-135000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000: exit status 7 (61.761667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (12.08s)
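The newest-cni profile fails identically, which points at the shared socket_vmnet daemon rather than anything cluster-specific. A sketch of reproducing the refusal without minikube, assuming socket_vmnet_client wraps an arbitrary command and hands it the connected socket as fd 3 ('true' below is just a placeholder command):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# With no daemon on the socket, this should print the same error as the log:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused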

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-111000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-111000 create -f testdata/busybox.yaml: exit status 1 (29.306875ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-111000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000: exit status 7 (31.436834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-111000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000: exit status 7 (33.140208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-111000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
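The "no openapi getter" message is most likely a symptom rather than the root cause: kubectl create wants the apiserver's OpenAPI schema to validate the manifest, and since the VM never started there is no apiserver to fetch it from. A quick confirmation sketch, reusing the context name from the log:

	# Expected to fail with a connection/context error while the host is "Stopped":
	kubectl --context default-k8s-diff-port-111000 get nodes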

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-111000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-111000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-111000 describe deploy/metrics-server -n kube-system: exit status 1 (29.509166ms)

** stderr ** 
	error: context "default-k8s-diff-port-111000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-111000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000: exit status 7 (29.26425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-111000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.14s)
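Here kubectl fails one step earlier: minikube writes a context into the kubeconfig only after a successful start, so the failed start above appears to have left no default-k8s-diff-port-111000 entry behind. A sketch to confirm which contexts actually exist, using the KUBECONFIG path from the log:

	KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig kubectl config get-contexts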

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-111000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-111000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (7.183873292s)

-- stdout --
	* [default-k8s-diff-port-111000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-111000 in cluster default-k8s-diff-port-111000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-111000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-111000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0719 16:44:17.320649    5454 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:44:17.320772    5454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:17.320775    5454 out.go:309] Setting ErrFile to fd 2...
	I0719 16:44:17.320777    5454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:17.320903    5454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:44:17.321848    5454 out.go:303] Setting JSON to false
	I0719 16:44:17.337532    5454 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4428,"bootTime":1689805829,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:44:17.337595    5454 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:44:17.342582    5454 out.go:177] * [default-k8s-diff-port-111000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:44:17.352568    5454 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:44:17.348597    5454 notify.go:220] Checking for updates...
	I0719 16:44:17.358515    5454 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:44:17.365520    5454 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:44:17.371530    5454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:44:17.382442    5454 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:44:17.386552    5454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:44:17.390779    5454 config.go:182] Loaded profile config "default-k8s-diff-port-111000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:44:17.391034    5454 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:44:17.394476    5454 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 16:44:17.401526    5454 start.go:298] selected driver: qemu2
	I0719 16:44:17.401530    5454 start.go:880] validating driver "qemu2" against &{Name:default-k8s-diff-port-111000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-111000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:44:17.401580    5454 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:44:17.403310    5454 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0719 16:44:17.403341    5454 cni.go:84] Creating CNI manager for ""
	I0719 16:44:17.403347    5454 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:44:17.403352    5454 start_flags.go:319] config:
	{Name:default-k8s-diff-port-111000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-111000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:44:17.407241    5454 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:44:17.414525    5454 out.go:177] * Starting control plane node default-k8s-diff-port-111000 in cluster default-k8s-diff-port-111000
	I0719 16:44:17.418587    5454 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:44:17.418617    5454 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:44:17.418635    5454 cache.go:57] Caching tarball of preloaded images
	I0719 16:44:17.418697    5454 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:44:17.418709    5454 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:44:17.418772    5454 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/default-k8s-diff-port-111000/config.json ...
	I0719 16:44:17.419083    5454 start.go:365] acquiring machines lock for default-k8s-diff-port-111000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:44:19.382521    5454 start.go:369] acquired machines lock for "default-k8s-diff-port-111000" in 1.963398625s
	I0719 16:44:19.382711    5454 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:44:19.382749    5454 fix.go:54] fixHost starting: 
	I0719 16:44:19.383447    5454 fix.go:102] recreateIfNeeded on default-k8s-diff-port-111000: state=Stopped err=<nil>
	W0719 16:44:19.383488    5454 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:44:19.399305    5454 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-111000" ...
	I0719 16:44:19.407459    5454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:cb:f1:fa:b9:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2
	I0719 16:44:19.416886    5454 main.go:141] libmachine: STDOUT: 
	I0719 16:44:19.416933    5454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:19.417052    5454 fix.go:56] fixHost completed within 34.320834ms
	I0719 16:44:19.417078    5454 start.go:83] releasing machines lock for "default-k8s-diff-port-111000", held for 34.525375ms
	W0719 16:44:19.417115    5454 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:44:19.417263    5454 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:19.417278    5454 start.go:687] Will try again in 5 seconds ...
	I0719 16:44:24.419455    5454 start.go:365] acquiring machines lock for default-k8s-diff-port-111000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:44:24.419977    5454 start.go:369] acquired machines lock for "default-k8s-diff-port-111000" in 421.25µs
	I0719 16:44:24.420140    5454 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:44:24.420163    5454 fix.go:54] fixHost starting: 
	I0719 16:44:24.420919    5454 fix.go:102] recreateIfNeeded on default-k8s-diff-port-111000: state=Stopped err=<nil>
	W0719 16:44:24.420947    5454 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:44:24.429418    5454 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-111000" ...
	I0719 16:44:24.433691    5454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:cb:f1:fa:b9:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/default-k8s-diff-port-111000/disk.qcow2
	I0719 16:44:24.442701    5454 main.go:141] libmachine: STDOUT: 
	I0719 16:44:24.442745    5454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:24.442821    5454 fix.go:56] fixHost completed within 22.66225ms
	I0719 16:44:24.442863    5454 start.go:83] releasing machines lock for "default-k8s-diff-port-111000", held for 22.836666ms
	W0719 16:44:24.443028    5454 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-111000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-111000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:24.450483    5454 out.go:177] 
	W0719 16:44:24.454587    5454 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:44:24.454633    5454 out.go:239] * 
	* 
	W0719 16:44:24.457168    5454 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:44:24.464511    5454 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-111000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000: exit status 7 (67.246042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-111000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.25s)
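Every failure in this group reduces to the same root cause: nothing is listening on the unix socket /var/run/socket_vmnet, so socket_vmnet_client cannot hand a network file descriptor to qemu-system-aarch64 and the VM restart aborts before the guest ever boots. A minimal probe that reproduces the driver's symptom, written as an illustrative sketch rather than minikube's actual code:

    // probe_socket_vmnet.go: illustrative only. Dialing the socket path
    // reported in the log distinguishes "daemon not running" (connection
    // refused, or no such file) from a healthy socket_vmnet service.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path from the log above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1) // the same condition the driver reports above
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails the way the log does, the problem likely lies on the build host (the socket_vmnet daemon is down), so the suggested "minikube delete -p ..." is unlikely to help here.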

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-135000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-135000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.166279458s)

                                                
                                                
-- stdout --
	* [newest-cni-135000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-135000 in cluster newest-cni-135000
	* Restarting existing qemu2 VM for "newest-cni-135000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-135000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 16:44:19.737031    5474 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:44:19.737154    5474 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:19.737157    5474 out.go:309] Setting ErrFile to fd 2...
	I0719 16:44:19.737159    5474 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:19.737278    5474 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:44:19.738268    5474 out.go:303] Setting JSON to false
	I0719 16:44:19.753582    5474 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4430,"bootTime":1689805829,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:44:19.753642    5474 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:44:19.758467    5474 out.go:177] * [newest-cni-135000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:44:19.761455    5474 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:44:19.765516    5474 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:44:19.761499    5474 notify.go:220] Checking for updates...
	I0719 16:44:19.772423    5474 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:44:19.775456    5474 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:44:19.778387    5474 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:44:19.781475    5474 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:44:19.784762    5474 config.go:182] Loaded profile config "newest-cni-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:44:19.785000    5474 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:44:19.789413    5474 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 16:44:19.796442    5474 start.go:298] selected driver: qemu2
	I0719 16:44:19.796446    5474 start.go:880] validating driver "qemu2" against &{Name:newest-cni-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-135000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:44:19.796514    5474 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:44:19.798343    5474 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0719 16:44:19.798362    5474 cni.go:84] Creating CNI manager for ""
	I0719 16:44:19.798369    5474 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 16:44:19.798375    5474 start_flags.go:319] config:
	{Name:newest-cni-135000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-135000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:44:19.802313    5474 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 16:44:19.810462    5474 out.go:177] * Starting control plane node newest-cni-135000 in cluster newest-cni-135000
	I0719 16:44:19.814261    5474 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 16:44:19.814282    5474 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 16:44:19.814292    5474 cache.go:57] Caching tarball of preloaded images
	I0719 16:44:19.814361    5474 preload.go:174] Found /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0719 16:44:19.814366    5474 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 16:44:19.814435    5474 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/newest-cni-135000/config.json ...
	I0719 16:44:19.814732    5474 start.go:365] acquiring machines lock for newest-cni-135000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:44:19.814759    5474 start.go:369] acquired machines lock for "newest-cni-135000" in 22µs
	I0719 16:44:19.814767    5474 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:44:19.814772    5474 fix.go:54] fixHost starting: 
	I0719 16:44:19.814877    5474 fix.go:102] recreateIfNeeded on newest-cni-135000: state=Stopped err=<nil>
	W0719 16:44:19.814885    5474 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:44:19.819521    5474 out.go:177] * Restarting existing qemu2 VM for "newest-cni-135000" ...
	I0719 16:44:19.827436    5474 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c3:3c:f7:e8:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2
	I0719 16:44:19.829123    5474 main.go:141] libmachine: STDOUT: 
	I0719 16:44:19.829135    5474 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:19.829162    5474 fix.go:56] fixHost completed within 14.390792ms
	I0719 16:44:19.829167    5474 start.go:83] releasing machines lock for "newest-cni-135000", held for 14.404666ms
	W0719 16:44:19.829174    5474 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:44:19.829217    5474 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:19.829221    5474 start.go:687] Will try again in 5 seconds ...
	I0719 16:44:24.831200    5474 start.go:365] acquiring machines lock for newest-cni-135000: {Name:mk2e66579cab1d0bf6ca5b14581a0b96879f7a30 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0719 16:44:24.831280    5474 start.go:369] acquired machines lock for "newest-cni-135000" in 42.542µs
	I0719 16:44:24.831299    5474 start.go:96] Skipping create...Using existing machine configuration
	I0719 16:44:24.831303    5474 fix.go:54] fixHost starting: 
	I0719 16:44:24.831430    5474 fix.go:102] recreateIfNeeded on newest-cni-135000: state=Stopped err=<nil>
	W0719 16:44:24.831435    5474 fix.go:128] unexpected machine state, will restart: <nil>
	I0719 16:44:24.839810    5474 out.go:177] * Restarting existing qemu2 VM for "newest-cni-135000" ...
	I0719 16:44:24.843845    5474 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:c3:3c:f7:e8:e0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/newest-cni-135000/disk.qcow2
	I0719 16:44:24.845859    5474 main.go:141] libmachine: STDOUT: 
	I0719 16:44:24.845872    5474 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0719 16:44:24.845892    5474 fix.go:56] fixHost completed within 14.589625ms
	I0719 16:44:24.845898    5474 start.go:83] releasing machines lock for "newest-cni-135000", held for 14.61375ms
	W0719 16:44:24.845947    5474 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-135000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-135000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0719 16:44:24.852871    5474 out.go:177] 
	W0719 16:44:24.855884    5474 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0719 16:44:24.855897    5474 out.go:239] * 
	* 
	W0719 16:44:24.856362    5474 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 16:44:24.871811    5474 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-135000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000: exit status 7 (29.365708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.20s)
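The trace above also shows the recovery shape on a failed host start: fixHost fails fast (about 14ms here), the machines lock is released, and a single retry is attempted after a fixed five-second delay ("Will try again in 5 seconds ...") before the error is surfaced as GUEST_PROVISION. A sketch of that one-retry pattern, using hypothetical names rather than minikube's actual API:

    // retry_once.go: illustrative sketch of the retry shape visible in
    // the log; startWithRetry and the injected error are hypothetical.
    package main

    import (
        "errors"
        "fmt"
        "log"
        "time"
    )

    func startWithRetry(start func() error) error {
        if err := start(); err != nil {
            log.Printf("! StartHost failed, but will try again: %v", err)
            time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
            return start() // a second failure is surfaced to the caller
        }
        return nil
    }

    func main() {
        err := startWithRetry(func() error {
            return errors.New(`connect to "/var/run/socket_vmnet": connection refused`)
        })
        if err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }

Because the daemon stays down for the whole run, the retry buys nothing, which is why both SecondStart failures take roughly the length of that single delay.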

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-111000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000: exit status 7 (30.76925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-111000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
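This failure is the expected knock-on effect of the one above: a kubeconfig context for the profile is only written once "minikube start" succeeds, and SecondStart never got that far. A quick way to confirm the missing context, sketched with k8s.io/client-go (an assumption for illustration; the test resolves its client config through its own helpers):

    // context_check.go: illustrative; assumes k8s.io/client-go is available.
    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Honors KUBECONFIG, e.g. the 15585-1056/kubeconfig path above.
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        name := "default-k8s-diff-port-111000"
        if _, ok := cfg.Contexts[name]; !ok {
            fmt.Printf("context %q does not exist\n", name) // matches the test error
        }
    }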

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-111000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-111000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-111000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.771083ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-111000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-111000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000: exit status 7 (28.188208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-111000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-111000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-111000 "sudo crictl images -o json": exit status 89 (39.345167ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-111000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-111000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-111000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000: exit status 7 (27.627291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-111000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
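VerifyKubernetesImages fails twice over here: the ssh command exits 89 because the node is down, and the test then tries to JSON-decode the human-readable banner, producing the "invalid character '*'" error; with nothing decoded, the entire expected image set shows up as missing in the -want +got diff. A minimal decoder for healthy "crictl images -o json" output (the images/repoTags field names follow CRI conventions and are an assumption, not the test's actual helper):

    // decode_cri_images.go: illustrative; pipe `crictl images -o json`
    // into stdin. Field names assumed from CRI conventions.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type criImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        var imgs criImages
        if err := json.NewDecoder(os.Stdin).Decode(&imgs); err != nil {
            // A non-JSON banner such as "* The control plane node must be
            // running ..." fails exactly as the log shows: the decoder
            // trips on the leading '*'.
            fmt.Fprintln(os.Stderr, "failed to decode images json:", err)
            os.Exit(1)
        }
        for _, img := range imgs.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag)
            }
        }
    }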

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-111000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-111000 --alsologtostderr -v=1: exit status 89 (39.957209ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-111000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 16:44:24.726162    5493 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:44:24.726307    5493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:24.726310    5493 out.go:309] Setting ErrFile to fd 2...
	I0719 16:44:24.726313    5493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:24.726417    5493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:44:24.726614    5493 out.go:303] Setting JSON to false
	I0719 16:44:24.726623    5493 mustload.go:65] Loading cluster: default-k8s-diff-port-111000
	I0719 16:44:24.726789    5493 config.go:182] Loaded profile config "default-k8s-diff-port-111000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:44:24.730882    5493 out.go:177] * The control plane node must be running for this command
	I0719 16:44:24.734955    5493 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-111000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-111000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000: exit status 7 (27.709959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-111000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000: exit status 7 (28.0285ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-111000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-135000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-135000 "sudo crictl images -o json": exit status 89 (47.888292ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-135000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-135000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-135000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000: exit status 7 (29.667584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-135000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-135000 --alsologtostderr -v=1: exit status 89 (38.898125ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-135000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 16:44:25.011862    5514 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:44:25.012835    5514 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:25.012839    5514 out.go:309] Setting ErrFile to fd 2...
	I0719 16:44:25.012842    5514 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:44:25.012951    5514 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:44:25.013137    5514 out.go:303] Setting JSON to false
	I0719 16:44:25.013147    5514 mustload.go:65] Loading cluster: newest-cni-135000
	I0719 16:44:25.013330    5514 config.go:182] Loaded profile config "newest-cni-135000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:44:25.015822    5514 out.go:177] * The control plane node must be running for this command
	I0719 16:44:25.019013    5514 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-135000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-135000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000: exit status 7 (29.87825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-135000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000: exit status 7 (30.135292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-135000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (142/255)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.27.3/json-events 11.79
11 TestDownloadOnly/v1.27.3/preload-exists 0
14 TestDownloadOnly/v1.27.3/kubectl 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.28
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
19 TestBinaryMirror 0.37
22 TestAddons/Setup 404.09
27 TestAddons/parallel/MetricsServer 5.3
31 TestAddons/parallel/Headlamp 14.48
35 TestAddons/serial/GCPAuth/Namespaces 0.07
36 TestAddons/StoppedEnableDisable 12.27
44 TestHyperKitDriverInstallOrUpdate 8.4
47 TestErrorSpam/setup 29.6
48 TestErrorSpam/start 0.36
49 TestErrorSpam/status 0.27
50 TestErrorSpam/pause 0.71
51 TestErrorSpam/unpause 0.62
52 TestErrorSpam/stop 12.25
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 53.51
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 37.06
59 TestFunctional/serial/KubeContext 0.03
60 TestFunctional/serial/KubectlGetPods 0.05
63 TestFunctional/serial/CacheCmd/cache/add_remote 5.8
64 TestFunctional/serial/CacheCmd/cache/add_local 1.18
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
66 TestFunctional/serial/CacheCmd/cache/list 0.03
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.26
69 TestFunctional/serial/CacheCmd/cache/delete 0.06
70 TestFunctional/serial/MinikubeKubectlCmd 0.43
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.56
72 TestFunctional/serial/ExtraConfig 31.59
73 TestFunctional/serial/ComponentHealth 0.04
74 TestFunctional/serial/LogsCmd 0.66
75 TestFunctional/serial/LogsFileCmd 0.61
76 TestFunctional/serial/InvalidService 3.62
78 TestFunctional/parallel/ConfigCmd 0.19
79 TestFunctional/parallel/DashboardCmd 9.26
80 TestFunctional/parallel/DryRun 0.22
81 TestFunctional/parallel/InternationalLanguage 0.11
82 TestFunctional/parallel/StatusCmd 0.25
87 TestFunctional/parallel/AddonsCmd 0.12
88 TestFunctional/parallel/PersistentVolumeClaim 27.14
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.26
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.38
98 TestFunctional/parallel/NodeLabels 0.04
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.06
102 TestFunctional/parallel/License 0.63
103 TestFunctional/parallel/Version/short 0.06
104 TestFunctional/parallel/Version/components 0.17
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
109 TestFunctional/parallel/ImageCommands/ImageBuild 2.84
110 TestFunctional/parallel/ImageCommands/Setup 2.69
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.16
112 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.54
113 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.73
114 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.51
115 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
116 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
117 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.58
118 TestFunctional/parallel/DockerEnv/bash 0.4
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.13
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
128 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
131 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
132 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
133 TestFunctional/parallel/MountCmd/any-port 6.3
134 TestFunctional/parallel/MountCmd/specific-port 1.07
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1
136 TestFunctional/parallel/ServiceCmd/DeployApp 8.09
137 TestFunctional/parallel/ServiceCmd/List 0.32
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.12
140 TestFunctional/parallel/ServiceCmd/Format 0.1
141 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
143 TestFunctional/parallel/ProfileCmd/profile_list 0.15
144 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
145 TestFunctional/delete_addon-resizer_images 0.11
146 TestFunctional/delete_my-image_image 0.04
147 TestFunctional/delete_minikube_cached_images 0.04
151 TestImageBuild/serial/Setup 28.82
152 TestImageBuild/serial/NormalBuild 2.06
154 TestImageBuild/serial/BuildWithDockerIgnore 0.11
155 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
158 TestIngressAddonLegacy/StartLegacyK8sCluster 71.55
160 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 15.87
161 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.23
165 TestJSONOutput/start/Command 46.29
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/pause/Command 0.32
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/unpause/Command 0.22
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 12.08
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.33
193 TestMainNoArgs 0.03
194 TestMinikubeProfile 61.48
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
255 TestNoKubernetes/serial/ProfileList 0.15
256 TestNoKubernetes/serial/Stop 0.06
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
272 TestStartStop/group/old-k8s-version/serial/Stop 0.06
273 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
287 TestStartStop/group/no-preload/serial/Stop 0.07
288 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
292 TestStartStop/group/embed-certs/serial/Stop 0.06
293 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
309 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
310 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
312 TestStartStop/group/newest-cni/serial/DeployApp 0
313 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
314 TestStartStop/group/newest-cni/serial/Stop 0.06
315 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
321 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
322 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-744000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-744000: exit status 85 (93.068417ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |          |
	|         | -p download-only-744000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/19 15:51:12
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:51:12.099826    1472 out.go:296] Setting OutFile to fd 1 ...
	I0719 15:51:12.099941    1472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:12.099946    1472 out.go:309] Setting ErrFile to fd 2...
	I0719 15:51:12.099949    1472 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:12.100064    1472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	W0719 15:51:12.100124    1472 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/15585-1056/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15585-1056/.minikube/config/config.json: no such file or directory
	I0719 15:51:12.101214    1472 out.go:303] Setting JSON to true
	I0719 15:51:12.117276    1472 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1243,"bootTime":1689805829,"procs":411,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 15:51:12.117355    1472 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 15:51:12.122226    1472 out.go:97] [download-only-744000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 15:51:12.125171    1472 out.go:169] MINIKUBE_LOCATION=15585
	I0719 15:51:12.122360    1472 notify.go:220] Checking for updates...
	W0719 15:51:12.122367    1472 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball: no such file or directory
	I0719 15:51:12.131133    1472 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:51:12.134170    1472 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 15:51:12.137136    1472 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:51:12.140165    1472 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	W0719 15:51:12.146117    1472 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 15:51:12.146319    1472 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 15:51:12.152220    1472 out.go:97] Using the qemu2 driver based on user configuration
	I0719 15:51:12.152243    1472 start.go:298] selected driver: qemu2
	I0719 15:51:12.152246    1472 start.go:880] validating driver "qemu2" against <nil>
	I0719 15:51:12.152334    1472 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0719 15:51:12.154333    1472 out.go:169] Automatically selected the socket_vmnet network
	I0719 15:51:12.159385    1472 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0719 15:51:12.159455    1472 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0719 15:51:12.159508    1472 cni.go:84] Creating CNI manager for ""
	I0719 15:51:12.159522    1472 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0719 15:51:12.159531    1472 start_flags.go:319] config:
	{Name:download-only-744000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-744000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:51:12.165087    1472 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:51:12.169194    1472 out.go:97] Downloading VM boot image ...
	I0719 15:51:12.169226    1472 download.go:107] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/iso/arm64/minikube-v1.31.0-arm64.iso
	I0719 15:51:22.049392    1472 out.go:97] Starting control plane node download-only-744000 in cluster download-only-744000
	I0719 15:51:22.049404    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0719 15:51:22.146428    1472 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0719 15:51:22.146469    1472 cache.go:57] Caching tarball of preloaded images
	I0719 15:51:22.146678    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0719 15:51:22.151790    1472 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0719 15:51:22.151799    1472 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 15:51:22.378262    1472 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0719 15:51:31.802635    1472 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 15:51:31.802771    1472 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0719 15:51:32.442477    1472 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0719 15:51:32.442679    1472 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/download-only-744000/config.json ...
	I0719 15:51:32.442699    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/download-only-744000/config.json: {Name:mk9bad5674d07bb0011804ae23f3f05ea64dfd4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0719 15:51:32.442930    1472 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0719 15:51:32.443154    1472 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0719 15:51:33.027281    1472 out.go:169] 
	W0719 15:51:33.032206    1472 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15585-1056/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0 0x107f606b0] Decompressors:map[bz2:0x14000122848 gz:0x140001228a0 tar:0x14000122850 tar.bz2:0x14000122860 tar.gz:0x14000122870 tar.xz:0x14000122880 tar.zst:0x14000122890 tbz2:0x14000122860 tgz:0x14000122870 txz:0x14000122880 tzst:0x14000122890 xz:0x140001228a8 zip:0x140001228b0 zst:0x140001228c0] Getters:map[file:0x1400065cc30 http:0x14000a4a190 https:0x14000a4a1e0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0719 15:51:33.032230    1472 out_reason.go:110] 
	W0719 15:51:33.039302    1472 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0719 15:51:33.043265    1472 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-744000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)
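
The kubectl cache failure above reduces to the checksum URL returning HTTP 404: v1.16.0 appears to predate darwin/arm64 kubectl release binaries, so neither the binary nor its .sha1 file is published. A minimal reproduction using the exact URLs from the log (a sketch; both requests are expected to report 404):

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1   # expect 404
	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl        # expect 404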

                                                
                                    
TestDownloadOnly/v1.27.3/json-events (11.79s)
=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-744000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-744000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=qemu2 : (11.788949917s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (11.79s)

                                                
                                    
TestDownloadOnly/v1.27.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.27.3/kubectl
--- PASS: TestDownloadOnly/v1.27.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.27.3/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-744000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-744000: exit status 85 (70.879708ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |          |
	|         | -p download-only-744000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-744000 | jenkins | v1.31.0 | 19 Jul 23 15:51 PDT |          |
	|         | -p download-only-744000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/07/19 15:51:33
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.20.6 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0719 15:51:33.229670    1482 out.go:296] Setting OutFile to fd 1 ...
	I0719 15:51:33.229788    1482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:33.229790    1482 out.go:309] Setting ErrFile to fd 2...
	I0719 15:51:33.229793    1482 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 15:51:33.229894    1482 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	W0719 15:51:33.229950    1482 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/15585-1056/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15585-1056/.minikube/config/config.json: no such file or directory
	I0719 15:51:33.230822    1482 out.go:303] Setting JSON to true
	I0719 15:51:33.245588    1482 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1264,"bootTime":1689805829,"procs":408,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 15:51:33.245659    1482 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 15:51:33.251012    1482 out.go:97] [download-only-744000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 15:51:33.255008    1482 out.go:169] MINIKUBE_LOCATION=15585
	I0719 15:51:33.251131    1482 notify.go:220] Checking for updates...
	I0719 15:51:33.260991    1482 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 15:51:33.264025    1482 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 15:51:33.266892    1482 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 15:51:33.270003    1482 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	W0719 15:51:33.274380    1482 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0719 15:51:33.274593    1482 config.go:182] Loaded profile config "download-only-744000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0719 15:51:33.274623    1482 start.go:788] api.Load failed for download-only-744000: filestore "download-only-744000": Docker machine "download-only-744000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0719 15:51:33.274677    1482 driver.go:373] Setting default libvirt URI to qemu:///system
	W0719 15:51:33.274689    1482 start.go:788] api.Load failed for download-only-744000: filestore "download-only-744000": Docker machine "download-only-744000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0719 15:51:33.277988    1482 out.go:97] Using the qemu2 driver based on existing profile
	I0719 15:51:33.277997    1482 start.go:298] selected driver: qemu2
	I0719 15:51:33.277999    1482 start.go:880] validating driver "qemu2" against &{Name:download-only-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-744000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:51:33.279829    1482 cni.go:84] Creating CNI manager for ""
	I0719 15:51:33.279843    1482 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0719 15:51:33.279850    1482 start_flags.go:319] config:
	{Name:download-only-744000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-744000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 15:51:33.283627    1482 iso.go:125] acquiring lock: {Name:mk79bbe185492232dddd2e3d2f1b7eb34761528f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0719 15:51:33.287036    1482 out.go:97] Starting control plane node download-only-744000 in cluster download-only-744000
	I0719 15:51:33.287042    1482 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:51:33.507863    1482 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 15:51:33.507915    1482 cache.go:57] Caching tarball of preloaded images
	I0719 15:51:33.508647    1482 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:51:33.513585    1482 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0719 15:51:33.513612    1482 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0719 15:51:33.715892    1482 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4?checksum=md5:e061b1178966dc348ac19219444153f4 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0719 15:51:41.464838    1482 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0719 15:51:41.464982    1482 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0719 15:51:42.023863    1482 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0719 15:51:42.023934    1482 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/download-only-744000/config.json ...
	I0719 15:51:42.024159    1482 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0719 15:51:42.024317    1482 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15585-1056/.minikube/cache/darwin/arm64/v1.27.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-744000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.07s)
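
The preload download above pins an md5 digest in the URL query string, and preload.go then saves and verifies that checksum locally (the "saving checksum" and "verifying checksum" lines in the log). The same check can be reproduced by hand from the URL and digest in the log (a sketch; md5 -q is macOS's digest tool, md5sum on Linux):

	curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	md5 -q preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4   # expect e061b1178966dc348ac19219444153f4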

                                                
                                    
TestDownloadOnly/DeleteAll (0.28s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.28s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-744000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestBinaryMirror (0.37s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-101000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-101000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-101000
--- PASS: TestBinaryMirror (0.37s)

                                                
                                    
TestAddons/Setup (404.09s)
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-101000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-darwin-arm64 start -p addons-101000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: (6m44.088942458s)
--- PASS: TestAddons/Setup (404.09s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.3s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 1.904875ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-844d8db974-vt8ml" [a25120a0-a3f2-4a32-851b-21a7b451818f] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014782333s
addons_test.go:391: (dbg) Run:  kubectl --context addons-101000 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p addons-101000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.30s)

                                                
                                    
TestAddons/parallel/Headlamp (14.48s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-101000 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-66f6498c69-gdc9w" [71316a7b-210d-4216-a5f7-5db7fc0dca13] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-66f6498c69-gdc9w" [71316a7b-210d-4216-a5f7-5db7fc0dca13] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.010627833s
--- PASS: TestAddons/parallel/Headlamp (14.48s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.07s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-101000 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-101000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.27s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-101000
addons_test.go:148: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-101000: (12.081926042s)
addons_test.go:152: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-101000
addons_test.go:156: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-101000
addons_test.go:161: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-101000
--- PASS: TestAddons/StoppedEnableDisable (12.27s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (8.4s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.40s)

                                                
                                    
TestErrorSpam/setup (29.6s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-022000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-022000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 --driver=qemu2 : (29.603068959s)
--- PASS: TestErrorSpam/setup (29.60s)

                                                
                                    
TestErrorSpam/start (0.36s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

                                                
                                    
TestErrorSpam/status (0.27s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 status
--- PASS: TestErrorSpam/status (0.27s)

                                                
                                    
TestErrorSpam/pause (0.71s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 pause
--- PASS: TestErrorSpam/pause (0.71s)

                                                
                                    
TestErrorSpam/unpause (0.62s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 unpause
--- PASS: TestErrorSpam/unpause (0.62s)

                                                
                                    
TestErrorSpam/stop (12.25s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 stop: (12.080498125s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-022000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-022000 stop
--- PASS: TestErrorSpam/stop (12.25s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/15585-1056/.minikube/files/etc/test/nested/copy/1470/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (53.51s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-001000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-001000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (53.510222042s)
--- PASS: TestFunctional/serial/StartWithProxy (53.51s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (37.06s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-001000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-001000 --alsologtostderr -v=8: (37.062385084s)
functional_test.go:659: soft start took 37.062931041s for "functional-001000" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.06s)

                                                
                                    
TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.05s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-001000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (5.8s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-001000 cache add registry.k8s.io/pause:3.1: (2.138453459s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-001000 cache add registry.k8s.io/pause:3.3: (2.066516416s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-001000 cache add registry.k8s.io/pause:latest: (1.598250292s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.80s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local572666783/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 cache add minikube-local-cache-test:functional-001000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 cache delete minikube-local-cache-test:functional-001000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-001000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-001000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (64.203334ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-001000 cache reload: (1.054441333s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.06s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.43s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 kubectl -- --context functional-001000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.43s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-001000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

                                                
                                    
TestFunctional/serial/ExtraConfig (31.59s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-001000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-001000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.5912935s)
functional_test.go:757: restart took 31.591448458s for "functional-001000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (31.59s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.04s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-001000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

                                                
                                    
TestFunctional/serial/LogsCmd (0.66s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.61s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3312554489/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.61s)

                                                
                                    
TestFunctional/serial/InvalidService (3.62s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-001000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-001000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-001000: exit status 115 (149.468084ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30980 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-001000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.62s)
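
The SVC_UNREACHABLE exit above (status 115) fires because invalid-svc resolves to a NodePort URL with no running pod behind it. testdata/invalidsvc.yaml itself is not included in this report; any Service whose selector matches no pods reproduces the condition, for example (a hedged stand-in, not the test's actual manifest):

	kubectl --context functional-001000 create service nodeport invalid-svc --tcp=80   # selector app=invalid-svc matches no pods
	out/minikube-darwin-arm64 service invalid-svc -p functional-001000                 # expect exit status 115, SVC_UNREACHABLE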

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.19s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-001000 config get cpus: exit status 14 (27.48625ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-001000 config get cpus: exit status 14 (27.363584ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.19s)
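
config get exits 14 while a key is unset ("specified key could not be found in config") and 0 once it is set; the cycle above can be asserted directly from a shell (a sketch against the same binary and profile as the log):

	out/minikube-darwin-arm64 -p functional-001000 config get cpus; echo "exit=$?"   # 14 while cpus is unset
	out/minikube-darwin-arm64 -p functional-001000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-001000 config get cpus; echo "exit=$?"   # 0 after set
	out/minikube-darwin-arm64 -p functional-001000 config unset cpus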

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.26s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-001000 --alsologtostderr -v=1]
E0719 16:28:40.460846    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-001000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2780: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.26s)

                                                
                                    
TestFunctional/parallel/DryRun (0.22s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-001000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-001000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.354167ms)

                                                
                                                
-- stdout --
	* [functional-001000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0719 16:28:38.242867    2767 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:28:38.242994    2767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:28:38.242997    2767 out.go:309] Setting ErrFile to fd 2...
	I0719 16:28:38.243000    2767 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:28:38.243127    2767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:28:38.244165    2767 out.go:303] Setting JSON to false
	I0719 16:28:38.259685    2767 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3489,"bootTime":1689805829,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:28:38.259768    2767 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:28:38.265182    2767 out.go:177] * [functional-001000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	I0719 16:28:38.272132    2767 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:28:38.272209    2767 notify.go:220] Checking for updates...
	I0719 16:28:38.279147    2767 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:28:38.282139    2767 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:28:38.285167    2767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:28:38.288147    2767 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:28:38.291098    2767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:28:38.294409    2767 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:28:38.294647    2767 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:28:38.299145    2767 out.go:177] * Using the qemu2 driver based on existing profile
	I0719 16:28:38.306115    2767 start.go:298] selected driver: qemu2
	I0719 16:28:38.306120    2767 start.go:880] validating driver "qemu2" against &{Name:functional-001000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-001000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:28:38.306188    2767 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:28:38.311105    2767 out.go:177] 
	W0719 16:28:38.315145    2767 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0719 16:28:38.319113    2767 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-001000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-001000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-001000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (109.818375ms)

-- stdout --
	* [functional-001000] minikube v1.31.0 sur Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0719 16:28:37.876376    2757 out.go:296] Setting OutFile to fd 1 ...
	I0719 16:28:37.876544    2757 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:28:37.876547    2757 out.go:309] Setting ErrFile to fd 2...
	I0719 16:28:37.876550    2757 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0719 16:28:37.876708    2757 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
	I0719 16:28:37.878048    2757 out.go:303] Setting JSON to false
	I0719 16:28:37.895361    2757 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3488,"bootTime":1689805829,"procs":421,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4.1","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0719 16:28:37.895453    2757 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I0719 16:28:37.900299    2757 out.go:177] * [functional-001000] minikube v1.31.0 sur Darwin 13.4.1 (arm64)
	I0719 16:28:37.907250    2757 out.go:177]   - MINIKUBE_LOCATION=15585
	I0719 16:28:37.911235    2757 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	I0719 16:28:37.907318    2757 notify.go:220] Checking for updates...
	I0719 16:28:37.915219    2757 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0719 16:28:37.918277    2757 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0719 16:28:37.921382    2757 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	I0719 16:28:37.924281    2757 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0719 16:28:37.927538    2757 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0719 16:28:37.927804    2757 driver.go:373] Setting default libvirt URI to qemu:///system
	I0719 16:28:37.932246    2757 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0719 16:28:37.939222    2757 start.go:298] selected driver: qemu2
	I0719 16:28:37.939227    2757 start.go:880] validating driver "qemu2" against &{Name:functional-001000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.31.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.40@sha256:8cadf23777709e43eca447c47a45f5a4635615129267ce025193040ec92a1631 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-001000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0}
	I0719 16:28:37.939303    2757 start.go:891] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0719 16:28:37.945115    2757 out.go:177] 
	W0719 16:28:37.949238    2757 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0719 16:28:37.953226    2757 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (27.14s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6f9a16fb-0f42-447c-a7fe-a415ee89cd5b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.036541s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-001000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-001000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-001000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-001000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d8e47fe8-73a2-4be1-b24b-44fce4793d92] Pending
helpers_test.go:344: "sp-pod" [d8e47fe8-73a2-4be1-b24b-44fce4793d92] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d8e47fe8-73a2-4be1-b24b-44fce4793d92] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.006905s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-001000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-001000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-001000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [43e3a2e3-adfe-4f31-b480-894fa13bb5b4] Pending
helpers_test.go:344: "sp-pod" [43e3a2e3-adfe-4f31-b480-894fa13bb5b4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [43e3a2e3-adfe-4f31-b480-894fa13bb5b4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.012490458s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-001000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.14s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.26s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh -n functional-001000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 cp functional-001000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd1169696094/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh -n functional-001000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.26s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1470/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "sudo cat /etc/test/nested/copy/1470/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1470.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "sudo cat /etc/ssl/certs/1470.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1470.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "sudo cat /usr/share/ca-certificates/1470.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14702.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "sudo cat /etc/ssl/certs/14702.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14702.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "sudo cat /usr/share/ca-certificates/14702.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.38s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-001000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-001000 ssh "sudo systemctl is-active crio": exit status 1 (59.136ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.06s)

TestFunctional/parallel/License (0.63s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.63s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.17s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-001000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-001000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-001000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-001000 image ls --format short --alsologtostderr:
I0719 16:28:47.944672    2786 out.go:296] Setting OutFile to fd 1 ...
I0719 16:28:47.945033    2786 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:47.945040    2786 out.go:309] Setting ErrFile to fd 2...
I0719 16:28:47.945043    2786 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:47.945151    2786 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
I0719 16:28:47.945543    2786 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:28:47.945598    2786 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:28:47.946437    2786 ssh_runner.go:195] Run: systemctl --version
I0719 16:28:47.946446    2786 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/functional-001000/id_rsa Username:docker}
I0719 16:28:47.973672    2786 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-001000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.27.3           | bcb9e554eaab6 | 56.2MB |
| registry.k8s.io/etcd                        | 3.5.7-0           | 24bc64e911039 | 181MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-001000 | 775d357093692 | 30B    |
| docker.io/library/nginx                     | alpine            | 66bf2c914bf4d | 41MB   |
| registry.k8s.io/kube-proxy                  | v1.27.3           | fb73e92641fd5 | 66.5MB |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/localhost/my-image                | functional-001000 | bc9a0af0264eb | 1.41MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-apiserver              | v1.27.3           | 39dfb036b0986 | 115MB  |
| registry.k8s.io/kube-controller-manager     | v1.27.3           | ab3683b584ae5 | 107MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/google-containers/addon-resizer      | functional-001000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | latest            | 2002d33a54f72 | 192MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-001000 image ls --format table --alsologtostderr:
I0719 16:28:50.998284    2798 out.go:296] Setting OutFile to fd 1 ...
I0719 16:28:50.998431    2798 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:50.998435    2798 out.go:309] Setting ErrFile to fd 2...
I0719 16:28:50.998437    2798 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:50.998547    2798 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
I0719 16:28:50.998914    2798 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:28:50.998969    2798 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:28:50.999730    2798 ssh_runner.go:195] Run: systemctl --version
I0719 16:28:50.999738    2798 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/functional-001000/id_rsa Username:docker}
I0719 16:28:51.026278    2798 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-001000 image ls --format json --alsologtostderr:
[{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-001000"],"size":"32900000"},{"id":"775d35709369292282da922f57652adef8bb1f68d506e330f3c93ed24fedd9e1","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-001000"],"size":"30"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size"
:"42300000"},{"id":"bc9a0af0264eb2ebea9840fba79cc2151fda78811e61d09f0cd0dae75682afef","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-001000"],"size":"1410000"},{"id":"66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1410000"},{"id":"ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb
601acc2d0611b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"107000000"},{"id":"fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"66500000"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"181000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473","repoDigests":[],"repoTags":["registry.k8s.io/kube-
apiserver:v1.27.3"],"size":"115000000"},{"id":"bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"56200000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-001000 image ls --format json --alsologtostderr:
I0719 16:28:50.927234    2796 out.go:296] Setting OutFile to fd 1 ...
I0719 16:28:50.927374    2796 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:50.927378    2796 out.go:309] Setting ErrFile to fd 2...
I0719 16:28:50.927380    2796 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:50.927508    2796 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
I0719 16:28:50.927952    2796 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:28:50.928010    2796 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:28:50.928783    2796 ssh_runner.go:195] Run: systemctl --version
I0719 16:28:50.928795    2796 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/functional-001000/id_rsa Username:docker}
I0719 16:28:50.957178    2796 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-001000 image ls --format yaml --alsologtostderr:
- id: bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.3
size: "56200000"
- id: ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.3
size: "107000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-001000
size: "32900000"
- id: 775d35709369292282da922f57652adef8bb1f68d506e330f3c93ed24fedd9e1
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-001000
size: "30"
- id: 2002d33a54f72d1333751d4d1b4793a60a635eac6e94a98daf0acea501580c4f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41000000"
- id: 39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.3
size: "115000000"
- id: fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.3
size: "66500000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "181000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-001000 image ls --format yaml --alsologtostderr:
I0719 16:28:48.015528    2788 out.go:296] Setting OutFile to fd 1 ...
I0719 16:28:48.016814    2788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:48.016818    2788 out.go:309] Setting ErrFile to fd 2...
I0719 16:28:48.016821    2788 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:48.016965    2788 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
I0719 16:28:48.017327    2788 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:28:48.017383    2788 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:28:48.018135    2788 ssh_runner.go:195] Run: systemctl --version
I0719 16:28:48.018144    2788 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/functional-001000/id_rsa Username:docker}
I0719 16:28:48.043477    2788 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-001000 ssh pgrep buildkitd: exit status 1 (57.399541ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image build -t localhost/my-image:functional-001000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-001000 image build -t localhost/my-image:functional-001000 testdata/build --alsologtostderr: (2.708878541s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-001000 image build -t localhost/my-image:functional-001000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in ae3ab9a590ef
Removing intermediate container ae3ab9a590ef
---> 74d302ae7e42
Step 3/3 : ADD content.txt /
---> bc9a0af0264e
Successfully built bc9a0af0264e
Successfully tagged localhost/my-image:functional-001000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-001000 image build -t localhost/my-image:functional-001000 testdata/build --alsologtostderr:
I0719 16:28:48.148779    2792 out.go:296] Setting OutFile to fd 1 ...
I0719 16:28:48.148975    2792 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:48.148979    2792 out.go:309] Setting ErrFile to fd 2...
I0719 16:28:48.148982    2792 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0719 16:28:48.149093    2792 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/15585-1056/.minikube/bin
I0719 16:28:48.149506    2792 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:28:48.149884    2792 config.go:182] Loaded profile config "functional-001000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0719 16:28:48.150700    2792 ssh_runner.go:195] Run: systemctl --version
I0719 16:28:48.150712    2792 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15585-1056/.minikube/machines/functional-001000/id_rsa Username:docker}
I0719 16:28:48.176684    2792 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.922813115.tar
I0719 16:28:48.176749    2792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0719 16:28:48.180206    2792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.922813115.tar
I0719 16:28:48.181674    2792 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.922813115.tar: stat -c "%s %y" /var/lib/minikube/build/build.922813115.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.922813115.tar': No such file or directory
I0719 16:28:48.181686    2792 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.922813115.tar --> /var/lib/minikube/build/build.922813115.tar (3072 bytes)
I0719 16:28:48.188879    2792 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.922813115
I0719 16:28:48.192328    2792 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.922813115 -xf /var/lib/minikube/build/build.922813115.tar
I0719 16:28:48.195714    2792 docker.go:339] Building image: /var/lib/minikube/build/build.922813115
I0719 16:28:48.195752    2792 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-001000 /var/lib/minikube/build/build.922813115
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0719 16:28:50.817197    2792 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-001000 /var/lib/minikube/build/build.922813115: (2.621479s)
I0719 16:28:50.817259    2792 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.922813115
I0719 16:28:50.820339    2792 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.922813115.tar
I0719 16:28:50.823089    2792 build_images.go:207] Built localhost/my-image:functional-001000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.922813115.tar
I0719 16:28:50.823104    2792 build_images.go:123] succeeded building to: functional-001000
I0719 16:28:50.823106    2792 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.84s)

TestFunctional/parallel/ImageCommands/Setup (2.69s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.636873708s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-001000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.69s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image load --daemon gcr.io/google-containers/addon-resizer:functional-001000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-001000 image load --daemon gcr.io/google-containers/addon-resizer:functional-001000 --alsologtostderr: (2.096659959s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.16s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image load --daemon gcr.io/google-containers/addon-resizer:functional-001000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-001000 image load --daemon gcr.io/google-containers/addon-resizer:functional-001000 --alsologtostderr: (1.473643084s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.54s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.541982s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-001000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image load --daemon gcr.io/google-containers/addon-resizer:functional-001000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-001000 image load --daemon gcr.io/google-containers/addon-resizer:functional-001000 --alsologtostderr: (2.076445083s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.73s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image save gcr.io/google-containers/addon-resizer:functional-001000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.51s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image rm gcr.io/google-containers/addon-resizer:functional-001000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-001000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 image save --daemon gcr.io/google-containers/addon-resizer:functional-001000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-arm64 -p functional-001000 image save --daemon gcr.io/google-containers/addon-resizer:functional-001000 --alsologtostderr: (1.493725333s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-001000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.58s)

TestFunctional/parallel/DockerEnv/bash (0.4s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-001000 docker-env) && out/minikube-darwin-arm64 status -p functional-001000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-001000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.40s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-001000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-001000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [539c55cc-6f32-477a-b073-0315a0418d85] Pending
helpers_test.go:344: "nginx-svc" [539c55cc-6f32-477a-b073-0315a0418d85] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [539c55cc-6f32-477a-b073-0315a0418d85] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.007524167s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.13s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-001000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.59.90 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-001000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
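For reference, the TunnelCmd serial flow above can be replayed by hand outside the test harness. A minimal sketch, assuming the functional-001000 profile from this run is still up, that testdata/testsvc.yaml has been applied, and that the cluster DNS keeps its default 10.96.0.10 address:

    # start the tunnel in the background (the harness drives it as a daemon)
    out/minikube-darwin-arm64 -p functional-001000 tunnel --alsologtostderr &
    TUNNEL_PID=$!
    # read the LoadBalancer ingress IP assigned to nginx-svc (WaitService/IngressIP)
    kubectl --context functional-001000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # resolve the service name against the cluster DNS (DNSResolutionByDig)
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
    # tear the tunnel down again (DeleteTunnel)
    kill $TUNNEL_PID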

TestFunctional/parallel/MountCmd/any-port (6.3s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1094514904/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1689809299989297000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1094514904/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1689809299989297000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1094514904/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1689809299989297000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1094514904/001/test-1689809299989297000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (65.155542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 23:28 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 23:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 23:28 test-1689809299989297000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh cat /mount-9p/test-1689809299989297000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-001000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7e947658-75c7-457f-a624-a781176a4c4f] Pending
helpers_test.go:344: "busybox-mount" [7e947658-75c7-457f-a624-a781176a4c4f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7e947658-75c7-457f-a624-a781176a4c4f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7e947658-75c7-457f-a624-a781176a4c4f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005434417s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-001000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1094514904/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.30s)

TestFunctional/parallel/MountCmd/specific-port (1.07s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port12850618/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (59.528042ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port12850618/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-001000 ssh "sudo umount -f /mount-9p": exit status 1 (60.062125ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-001000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port12850618/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.07s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3104703655/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3104703655/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3104703655/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T" /mount1: exit status 80 (78.089708ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/15585-1056/.minikube/machines/functional-001000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_mount_d03542333c48a66139d605a4efb2cb58f8086c74_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-001000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3104703655/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3104703655/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-001000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3104703655/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.00s)
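The MountCmd subtests above all drive the same 9p check. A minimal sketch of that loop, assuming the same functional-001000 profile; the mktemp directory stands in for the per-test temp path the harness uses:

    # export a host directory into the guest over 9p (runs until killed)
    SRC=$(mktemp -d)
    out/minikube-darwin-arm64 mount -p functional-001000 "$SRC:/mount-9p" --alsologtostderr -v=1 &
    # confirm the guest sees a 9p filesystem at the mount point, then list it
    out/minikube-darwin-arm64 -p functional-001000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-arm64 -p functional-001000 ssh -- ls -la /mount-9p
    # clean up every mount for the profile (what VerifyCleanup exercises)
    out/minikube-darwin-arm64 mount -p functional-001000 --kill=true

The first findmnt can race the mount coming up, which is why the logs above show an initial exit status 1 followed by a successful retry.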

TestFunctional/parallel/ServiceCmd/DeployApp (8.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-001000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-001000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-dhknm" [8f2c0f0b-3cbc-44f3-8b63-461820c2b782] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-dhknm" [8f2c0f0b-3cbc-44f3-8b63-461820c2b782] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0719 16:28:30.530621    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
E0719 16:28:30.852725    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
E0719 16:28:31.494951    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
E0719 16:28:32.776511    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.013609s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.09s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 service list -o json
functional_test.go:1493: Took "285.042208ms" to run "out/minikube-darwin-arm64 -p functional-001000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:32523
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.12s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-001000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:32523
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "110.4735ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "34.737792ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "111.078958ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "32.770208ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

TestFunctional/delete_addon-resizer_images (0.11s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-001000
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-001000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-001000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (28.82s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-052000 --driver=qemu2 
E0719 16:29:11.184668    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-052000 --driver=qemu2 : (28.8190555s)
--- PASS: TestImageBuild/serial/Setup (28.82s)

TestImageBuild/serial/NormalBuild (2.06s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-052000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-052000: (2.058127292s)
--- PASS: TestImageBuild/serial/NormalBuild (2.06s)

TestImageBuild/serial/BuildWithDockerIgnore (0.11s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-052000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.11s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-052000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)

TestIngressAddonLegacy/StartLegacyK8sCluster (71.55s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-442000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
E0719 16:29:52.146168    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-442000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m11.549736875s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (71.55s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.87s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-442000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-442000 addons enable ingress --alsologtostderr -v=5: (15.866731417s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.87s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.23s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-442000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.23s)

TestJSONOutput/start/Command (46.29s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-630000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-630000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (46.288274375s)
--- PASS: TestJSONOutput/start/Command (46.29s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.32s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-630000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.32s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.22s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-630000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.22s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-630000 --output=json --user=testUser
E0719 16:32:55.619685    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:32:55.626136    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:32:55.638324    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:32:55.660425    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:32:55.701894    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:32:55.784051    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:32:55.946184    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:32:56.268359    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:32:56.908944    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-630000 --output=json --user=testUser: (12.079986834s)
--- PASS: TestJSONOutput/stop/Command (12.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-786000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-786000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.776833ms)

-- stdout --
	{"specversion":"1.0","id":"f1b96fdb-205f-4473-90b6-6e7eb3d31ea3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-786000] minikube v1.31.0 on Darwin 13.4.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae1647b7-cf5f-4eb4-800c-0d1607252a97","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15585"}}
	{"specversion":"1.0","id":"13e2b5af-ed58-4381-a313-e939284fdcb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig"}}
	{"specversion":"1.0","id":"7fef8fd5-63e6-49d3-9fd1-e56d9fa5a04d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"bc9bba4b-a14b-4968-8708-5ea01ebae4b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"75b883b8-8c3b-44e8-af74-82450a370174","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube"}}
	{"specversion":"1.0","id":"424f41f6-eec3-4cd3-a800-7593311d56e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a0f33a64-6f5d-421f-ab3d-183a93bc70d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-786000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-786000
--- PASS: TestErrorJSONOutput (0.33s)
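TestErrorJSONOutput checks that the unsupported-driver failure surfaces as a structured io.k8s.sigs.minikube.error event rather than plain text. Since the stream is one CloudEvents-style JSON object per line, it can be inspected with ordinary tools; a minimal sketch, assuming jq is available (it is not part of the harness) and reusing the profile name from this run:

    # re-run the failing start and keep only the error events from the JSON stream
    out/minikube-darwin-arm64 start -p json-output-error-786000 --memory=2200 \
      --output=json --wait=true --driver=fail \
    | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'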

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (61.48s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-574000 --driver=qemu2 
E0719 16:32:58.190330    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:33:00.752598    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:33:05.874708    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:33:16.116633    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-574000 --driver=qemu2 : (30.081051125s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-576000 --driver=qemu2 
E0719 16:33:30.197486    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
E0719 16:33:36.598431    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/functional-001000/client.crt: no such file or directory
E0719 16:33:57.906610    1470 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15585-1056/.minikube/profiles/addons-101000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-576000 --driver=qemu2 : (30.572444084s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-574000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-576000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-576000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-576000
helpers_test.go:175: Cleaning up "first-574000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-574000
--- PASS: TestMinikubeProfile (61.48s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-815000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-815000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (92.51525ms)

-- stdout --
	* [NoKubernetes-815000] minikube v1.31.0 on Darwin 13.4.1 (arm64)
	  - MINIKUBE_LOCATION=15585
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15585-1056/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15585-1056/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-815000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-815000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (44.368417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-815000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-815000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-815000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-815000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.347916ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-815000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-870000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-870000 -n old-k8s-version-870000: exit status 7 (28.697208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-870000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-512000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.07s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-512000 -n no-preload-512000: exit status 7 (28.465875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-512000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-279000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-279000 -n embed-certs-279000: exit status 7 (28.676958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-279000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-111000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-111000 -n default-k8s-diff-port-111000: exit status 7 (28.721959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-111000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-135000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-135000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-135000 -n newest-cni-135000: exit status 7 (28.390083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-135000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/255)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.37s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-318000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-318000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-318000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-318000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-318000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-318000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-318000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-318000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-318000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-318000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-318000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: /etc/hosts:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: /etc/resolv.conf:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-318000

>>> host: crictl pods:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: crictl containers:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> k8s: describe netcat deployment:
error: context "cilium-318000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-318000" does not exist

>>> k8s: netcat logs:
error: context "cilium-318000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-318000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-318000" does not exist

>>> k8s: coredns logs:
error: context "cilium-318000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-318000" does not exist

>>> k8s: api server logs:
error: context "cilium-318000" does not exist

>>> host: /etc/cni:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: ip a s:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: ip r s:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: iptables-save:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: iptables table nat:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-318000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-318000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-318000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-318000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-318000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-318000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-318000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-318000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-318000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-318000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-318000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: kubelet daemon config:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> k8s: kubelet logs:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-318000

>>> host: docker daemon status:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: docker daemon config:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: docker system info:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: cri-docker daemon status:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: cri-docker daemon config:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: cri-dockerd version:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: containerd daemon status:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: containerd daemon config:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: containerd config dump:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: crio daemon status:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: crio daemon config:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: /etc/crio:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

>>> host: crio config:
* Profile "cilium-318000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-318000"

----------------------- debugLogs end: cilium-318000 [took: 2.134817958s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-318000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-318000
--- SKIP: TestNetworkPlugins/group/cilium (2.37s)

TestStartStop/group/disable-driver-mounts (0.26s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-019000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-019000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)