Test Report: QEMU_macOS 16718

7444206e787665923ea4c40a89871c62d2d7496f:2023-06-15:29730

Failed tests (94/254)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 29.03
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 10.12
24 TestAddons/parallel/Registry 720.86
25 TestAddons/parallel/Ingress 0.74
26 TestAddons/parallel/InspektorGadget 480.84
27 TestAddons/parallel/MetricsServer 720.81
30 TestAddons/parallel/CSI 671.15
32 TestAddons/parallel/CloudSpanner 832.32
33 TestAddons/serial 0
34 TestAddons/StoppedEnableDisable 0
35 TestCertOptions 10.36
36 TestCertExpiration 195.45
37 TestDockerFlags 10.37
38 TestForceSystemdFlag 10.83
39 TestForceSystemdEnv 10.34
83 TestFunctional/parallel/ServiceCmdConnect 34.5
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
150 TestImageBuild/serial/BuildWithBuildArg 1.1
159 TestIngressAddonLegacy/serial/ValidateIngressAddons 52.19
194 TestMountStart/serial/StartWithMountFirst 10.22
197 TestMultiNode/serial/FreshStart2Nodes 9.77
198 TestMultiNode/serial/DeployApp2Nodes 113.74
199 TestMultiNode/serial/PingHostFrom2Pods 0.08
200 TestMultiNode/serial/AddNode 0.07
201 TestMultiNode/serial/ProfileList 0.1
202 TestMultiNode/serial/CopyFile 0.06
203 TestMultiNode/serial/StopNode 0.13
204 TestMultiNode/serial/StartAfterStop 0.1
205 TestMultiNode/serial/RestartKeepsNodes 5.36
206 TestMultiNode/serial/DeleteNode 0.1
207 TestMultiNode/serial/StopMultiNode 0.14
208 TestMultiNode/serial/RestartMultiNode 5.24
209 TestMultiNode/serial/ValidateNameConflict 20.01
213 TestPreload 10.03
215 TestScheduledStopUnix 9.84
216 TestSkaffold 16.96
219 TestRunningBinaryUpgrade 158.49
221 TestKubernetesUpgrade 15.41
234 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.55
235 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.29
236 TestStoppedBinaryUpgrade/Setup 163.46
238 TestPause/serial/Start 9.84
248 TestNoKubernetes/serial/StartWithK8s 10.05
249 TestNoKubernetes/serial/StartWithStopK8s 5.47
250 TestNoKubernetes/serial/Start 5.48
254 TestNoKubernetes/serial/StartNoArgs 5.46
256 TestNetworkPlugins/group/auto/Start 9.9
257 TestNetworkPlugins/group/kindnet/Start 9.78
258 TestNetworkPlugins/group/calico/Start 9.74
259 TestNetworkPlugins/group/custom-flannel/Start 9.86
260 TestNetworkPlugins/group/false/Start 9.7
261 TestNetworkPlugins/group/enable-default-cni/Start 9.81
262 TestNetworkPlugins/group/flannel/Start 9.89
263 TestNetworkPlugins/group/bridge/Start 9.68
264 TestNetworkPlugins/group/kubenet/Start 9.7
266 TestStartStop/group/old-k8s-version/serial/FirstStart 9.83
267 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
268 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
271 TestStartStop/group/old-k8s-version/serial/SecondStart 5.21
272 TestStoppedBinaryUpgrade/Upgrade 2.75
273 TestStoppedBinaryUpgrade/MinikubeLogs 0.13
274 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
275 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
276 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
277 TestStartStop/group/old-k8s-version/serial/Pause 0.11
279 TestStartStop/group/embed-certs/serial/FirstStart 9.92
281 TestStartStop/group/no-preload/serial/FirstStart 11.61
282 TestStartStop/group/embed-certs/serial/DeployApp 0.1
283 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.22
286 TestStartStop/group/embed-certs/serial/SecondStart 6.96
287 TestStartStop/group/no-preload/serial/DeployApp 0.09
288 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
291 TestStartStop/group/no-preload/serial/SecondStart 5.19
292 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
293 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
294 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
295 TestStartStop/group/embed-certs/serial/Pause 0.1
296 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
298 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
300 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.08
301 TestStartStop/group/no-preload/serial/Pause 0.11
303 TestStartStop/group/newest-cni/serial/FirstStart 11.8
304 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.2
308 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.96
313 TestStartStop/group/newest-cni/serial/SecondStart 5.19
314 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
315 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
316 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
317 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
320 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
321 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (29.03s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-066000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-066000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (29.028196584s)

-- stdout --
	{"specversion":"1.0","id":"8096a338-26e0-42a6-b374-9ddff88d557b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-066000] minikube v1.30.1 on Darwin 13.4 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2b4b767-c870-4abb-add2-ea5032d48c8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16718"}}
	{"specversion":"1.0","id":"b7fa0cd5-3471-45cc-879f-f735d9bd8abb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig"}}
	{"specversion":"1.0","id":"ad003a95-1eda-475b-87a3-cd42fa28a043","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"d63412f4-b1f4-4f45-b388-e7ea4861486a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"234ed23d-2317-4c49-ab77-b26c0912afb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube"}}
	{"specversion":"1.0","id":"0ea5d3af-969a-4847-ae55-91e164cd7cdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"48a65e0f-20cf-4564-aba9-591f4bbbc11b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6eb11118-caca-4a55-9745-6bbef5cceefa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"55214f22-2d6f-44e9-8882-e5c57f8ab10f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5096bad0-323e-4653-aaad-326b544841d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-066000 in cluster download-only-066000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b7b5c5d2-3222-40e9-bb93-119b62cbcefd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e6522a8-10e6-4567-9f0a-0dd4447267c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16718-868/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8] Decompressors:map[bz2:0x140006d28d0 gz:0x140006d28d8 tar:0x140006d2880 tar.bz2:0x140006d2890 tar.gz:0x140006d28a0 tar.xz:0x140006d28b0 tar.zst:0x140006d28c0 tbz2:0x140006d2890 tgz:0x140006d
28a0 txz:0x140006d28b0 tzst:0x140006d28c0 xz:0x140006d28e0 zip:0x140006d28f0 zst:0x140006d28e8] Getters:map[file:0x14000bfc740 http:0x14000a0ea00 https:0x14000a0ea50] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"076b589d-2ef0-4be2-ac49-3f17d7b6ae60","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0615 09:32:25.716356    1315 out.go:296] Setting OutFile to fd 1 ...
	I0615 09:32:25.716486    1315 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:32:25.716489    1315 out.go:309] Setting ErrFile to fd 2...
	I0615 09:32:25.716491    1315 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:32:25.716560    1315 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	W0615 09:32:25.716617    1315 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16718-868/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16718-868/.minikube/config/config.json: no such file or directory
	I0615 09:32:25.717699    1315 out.go:303] Setting JSON to true
	I0615 09:32:25.735952    1315 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":116,"bootTime":1686846629,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 09:32:25.736013    1315 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 09:32:25.741681    1315 out.go:97] [download-only-066000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 09:32:25.745660    1315 out.go:169] MINIKUBE_LOCATION=16718
	W0615 09:32:25.741806    1315 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball: no such file or directory
	I0615 09:32:25.741844    1315 notify.go:220] Checking for updates...
	I0615 09:32:25.753574    1315 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:32:25.756697    1315 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 09:32:25.758055    1315 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 09:32:25.760662    1315 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	W0615 09:32:25.766711    1315 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0615 09:32:25.766945    1315 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 09:32:25.771640    1315 out.go:97] Using the qemu2 driver based on user configuration
	I0615 09:32:25.771661    1315 start.go:297] selected driver: qemu2
	I0615 09:32:25.771665    1315 start.go:884] validating driver "qemu2" against <nil>
	I0615 09:32:25.771749    1315 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 09:32:25.775643    1315 out.go:169] Automatically selected the socket_vmnet network
	I0615 09:32:25.781061    1315 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0615 09:32:25.781148    1315 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0615 09:32:25.781195    1315 cni.go:84] Creating CNI manager for ""
	I0615 09:32:25.781201    1315 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0615 09:32:25.781205    1315 start_flags.go:319] config:
	{Name:download-only-066000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:32:25.781383    1315 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 09:32:25.785665    1315 out.go:97] Downloading VM boot image ...
	I0615 09:32:25.785696    1315 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso
	I0615 09:32:41.433833    1315 out.go:97] Starting control plane node download-only-066000 in cluster download-only-066000
	I0615 09:32:41.433854    1315 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0615 09:32:41.529630    1315 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0615 09:32:41.529716    1315 cache.go:57] Caching tarball of preloaded images
	I0615 09:32:41.530569    1315 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0615 09:32:41.535866    1315 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0615 09:32:41.535876    1315 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0615 09:32:41.742996    1315 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0615 09:32:53.435434    1315 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0615 09:32:53.435569    1315 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0615 09:32:54.079581    1315 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0615 09:32:54.079759    1315 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/download-only-066000/config.json ...
	I0615 09:32:54.079784    1315 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/download-only-066000/config.json: {Name:mkeab36ea4760a4354a26ce4f059985f1309a7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:32:54.080013    1315 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0615 09:32:54.080189    1315 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0615 09:32:54.674458    1315 out.go:169] 
	W0615 09:32:54.679596    1315 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16718-868/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8] Decompressors:map[bz2:0x140006d28d0 gz:0x140006d28d8 tar:0x140006d2880 tar.bz2:0x140006d2890 tar.gz:0x140006d28a0 tar.xz:0x140006d28b0 tar.zst:0x140006d28c0 tbz2:0x140006d2890 tgz:0x140006d28a0 txz:0x140006d28b0 tzst:0x140006d28c0 xz:0x140006d28e0 zip:0x140006d28f0 zst:0x140006d28e8] Getters:map[file:0x14000bfc740 http:0x14000a0ea00 https:0x14000a0ea50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0615 09:32:54.679626    1315 out_reason.go:110] 
	W0615 09:32:54.686527    1315 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 09:32:54.690415    1315 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-066000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (29.03s)
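Diagnosis: v1.16.0 predates Apple Silicon, so no darwin/arm64 kubectl (or its .sha1 checksum file) was ever published, which is why the checksum URL in the error above returns 404. The TestDownloadOnly/v1.16.0/kubectl failure that follows is a direct cascade: it stats the same cache path this download never produced. A quick spot-check of the upstream URLs, assuming curl is available on the runner (the v1.16.0 URL is taken verbatim from the error; v1.27.3 is used only as a known-good comparison):

    # Print the final HTTP status after redirects; expect 404 for the v1.16.0 checksum file.
    curl -sL -o /dev/null -w '%{http_code}\n' \
      "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
    # A release that does ship darwin/arm64 binaries, for comparison; expect 200.
    curl -sL -o /dev/null -w '%{http_code}\n' \
      "https://dl.k8s.io/release/v1.27.3/bin/darwin/arm64/kubectl"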

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/16718-868/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestOffline (10.12s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-690000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-690000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.948591208s)

-- stdout --
	* [offline-docker-690000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-690000 in cluster offline-docker-690000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-690000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:25:52.212745    3867 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:25:52.212877    3867 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:25:52.212880    3867 out.go:309] Setting ErrFile to fd 2...
	I0615 10:25:52.212883    3867 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:25:52.212955    3867 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:25:52.213994    3867 out.go:303] Setting JSON to false
	I0615 10:25:52.230815    3867 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3323,"bootTime":1686846629,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:25:52.230877    3867 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:25:52.234746    3867 out.go:177] * [offline-docker-690000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:25:52.242724    3867 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:25:52.242778    3867 notify.go:220] Checking for updates...
	I0615 10:25:52.247853    3867 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:25:52.250701    3867 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:25:52.253897    3867 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:25:52.256761    3867 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:25:52.259804    3867 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:25:52.263007    3867 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:25:52.263054    3867 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:25:52.266746    3867 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:25:52.273662    3867 start.go:297] selected driver: qemu2
	I0615 10:25:52.273667    3867 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:25:52.273672    3867 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:25:52.275590    3867 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:25:52.278706    3867 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:25:52.281826    3867 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:25:52.281846    3867 cni.go:84] Creating CNI manager for ""
	I0615 10:25:52.281851    3867 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:25:52.281854    3867 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:25:52.281859    3867 start_flags.go:319] config:
	{Name:offline-docker-690000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:25:52.281968    3867 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:25:52.288528    3867 out.go:177] * Starting control plane node offline-docker-690000 in cluster offline-docker-690000
	I0615 10:25:52.292665    3867 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:25:52.292696    3867 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:25:52.292707    3867 cache.go:57] Caching tarball of preloaded images
	I0615 10:25:52.292780    3867 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:25:52.292787    3867 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:25:52.292841    3867 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/offline-docker-690000/config.json ...
	I0615 10:25:52.292854    3867 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/offline-docker-690000/config.json: {Name:mkb78bf69d2a4e69cefffef5c90cae453706cd1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:25:52.293040    3867 start.go:365] acquiring machines lock for offline-docker-690000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:25:52.293067    3867 start.go:369] acquired machines lock for "offline-docker-690000" in 21.708µs
	I0615 10:25:52.293076    3867 start.go:93] Provisioning new machine with config: &{Name:offline-docker-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:25:52.293102    3867 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:25:52.296711    3867 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0615 10:25:52.310779    3867 start.go:159] libmachine.API.Create for "offline-docker-690000" (driver="qemu2")
	I0615 10:25:52.310812    3867 client.go:168] LocalClient.Create starting
	I0615 10:25:52.310879    3867 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:25:52.310901    3867 main.go:141] libmachine: Decoding PEM data...
	I0615 10:25:52.310909    3867 main.go:141] libmachine: Parsing certificate...
	I0615 10:25:52.310951    3867 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:25:52.310965    3867 main.go:141] libmachine: Decoding PEM data...
	I0615 10:25:52.310973    3867 main.go:141] libmachine: Parsing certificate...
	I0615 10:25:52.311295    3867 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:25:52.423887    3867 main.go:141] libmachine: Creating SSH key...
	I0615 10:25:52.473752    3867 main.go:141] libmachine: Creating Disk image...
	I0615 10:25:52.473762    3867 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:25:52.473911    3867 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2
	I0615 10:25:52.482358    3867 main.go:141] libmachine: STDOUT: 
	I0615 10:25:52.482379    3867 main.go:141] libmachine: STDERR: 
	I0615 10:25:52.482453    3867 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2 +20000M
	I0615 10:25:52.490279    3867 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:25:52.490296    3867 main.go:141] libmachine: STDERR: 
	I0615 10:25:52.490316    3867 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2
	I0615 10:25:52.490323    3867 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:25:52.490360    3867 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:e4:b0:a4:d3:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2
	I0615 10:25:52.491974    3867 main.go:141] libmachine: STDOUT: 
	I0615 10:25:52.491989    3867 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:25:52.492009    3867 client.go:171] LocalClient.Create took 181.196542ms
	I0615 10:25:54.494038    3867 start.go:128] duration metric: createHost completed in 2.200964958s
	I0615 10:25:54.494068    3867 start.go:83] releasing machines lock for "offline-docker-690000", held for 2.201032083s
	W0615 10:25:54.494098    3867 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:25:54.502656    3867 out.go:177] * Deleting "offline-docker-690000" in qemu2 ...
	W0615 10:25:54.511711    3867 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:25:54.511717    3867 start.go:687] Will try again in 5 seconds ...
	I0615 10:25:59.513731    3867 start.go:365] acquiring machines lock for offline-docker-690000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:25:59.513840    3867 start.go:369] acquired machines lock for "offline-docker-690000" in 84.792µs
	I0615 10:25:59.513873    3867 start.go:93] Provisioning new machine with config: &{Name:offline-docker-690000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:offline-docker-690000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:25:59.513925    3867 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:25:59.528634    3867 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0615 10:25:59.542703    3867 start.go:159] libmachine.API.Create for "offline-docker-690000" (driver="qemu2")
	I0615 10:25:59.542720    3867 client.go:168] LocalClient.Create starting
	I0615 10:25:59.542783    3867 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:25:59.542821    3867 main.go:141] libmachine: Decoding PEM data...
	I0615 10:25:59.542829    3867 main.go:141] libmachine: Parsing certificate...
	I0615 10:25:59.542857    3867 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:25:59.542889    3867 main.go:141] libmachine: Decoding PEM data...
	I0615 10:25:59.542896    3867 main.go:141] libmachine: Parsing certificate...
	I0615 10:25:59.543178    3867 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:26:00.016315    3867 main.go:141] libmachine: Creating SSH key...
	I0615 10:26:00.073435    3867 main.go:141] libmachine: Creating Disk image...
	I0615 10:26:00.073444    3867 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:26:00.073593    3867 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2
	I0615 10:26:00.082226    3867 main.go:141] libmachine: STDOUT: 
	I0615 10:26:00.082238    3867 main.go:141] libmachine: STDERR: 
	I0615 10:26:00.082292    3867 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2 +20000M
	I0615 10:26:00.089634    3867 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:26:00.089661    3867 main.go:141] libmachine: STDERR: 
	I0615 10:26:00.089681    3867 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2
	I0615 10:26:00.089686    3867 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:26:00.089736    3867 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:84:68:6e:be:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/offline-docker-690000/disk.qcow2
	I0615 10:26:00.091318    3867 main.go:141] libmachine: STDOUT: 
	I0615 10:26:00.091331    3867 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:26:00.091346    3867 client.go:171] LocalClient.Create took 548.632333ms
	I0615 10:26:02.093533    3867 start.go:128] duration metric: createHost completed in 2.579588417s
	I0615 10:26:02.093616    3867 start.go:83] releasing machines lock for "offline-docker-690000", held for 2.579805792s
	W0615 10:26:02.093962    3867 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-690000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:26:02.102537    3867 out.go:177] 
	W0615 10:26:02.107832    3867 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:26:02.107876    3867 out.go:239] * 
	* 
	W0615 10:26:02.110562    3867 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:26:02.118562    3867 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-690000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-06-15 10:26:02.131988 -0700 PDT m=+3216.468970876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-690000 -n offline-docker-690000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-690000 -n offline-docker-690000: exit status 7 (68.777583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-690000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-690000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-690000
--- FAIL: TestOffline (10.12s)
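Diagnosis: both VM creation attempts fail at the same step: socket_vmnet_client cannot reach the /var/run/socket_vmnet socket ("Connection refused"), so the qemu2 VM never gets networking and minikube exits with status 80. Some checks to run on the CI host before re-triggering; a sketch assuming the paths shown in the log and a socket_vmnet install that may be launchd-managed:

    # Does the socket exist? (SocketVMnetPath from the profile config in the log)
    ls -l /var/run/socket_vmnet
    # Is a socket_vmnet daemon process running?
    pgrep -lf socket_vmnet
    # If launchd-managed, look for its job (the label varies by install method):
    sudo launchctl list | grep -i socket_vmnet
    # Reproduce the failing step without qemu; a healthy daemon should not print "Connection refused":
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true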

TestAddons/parallel/Registry (720.86s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:304: failed waiting for registry replicacontroller to stabilize: timed out waiting for the condition
addons_test.go:306: registry stabilized in 6m0.001855s
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
addons_test.go:308: ***** TestAddons/parallel/Registry: pod "actual-registry=true" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-477000 -n addons-477000
addons_test.go:308: TestAddons/parallel/Registry: showing logs for failed pods as of 2023-06-15 09:52:00.65588 -0700 PDT m=+1174.939317418
addons_test.go:309: failed waiting for pod actual-registry: actual-registry=true within 6m0s: context deadline exceeded
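If the cluster were still reachable, a manual triage sketch (the namespace and label selector are taken verbatim from the assertions above; the replication controller name "registry" is an assumption based on the addon name):

    # List the pods the test was polling for:
    kubectl -n kube-system get pods -l actual-registry=true -o wide
    # Inspect the replication controller that never stabilized:
    kubectl -n kube-system get rc
    kubectl -n kube-system describe rc registry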
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-477000 -n addons-477000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-477000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | --download-only -p             | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT |                     |
	|         | binary-mirror-062000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-062000        | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | -p addons-477000               | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:40 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:52 PDT |                     |
	|         | addons-477000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 09:33:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 09:33:17.135905    1397 out.go:296] Setting OutFile to fd 1 ...
	I0615 09:33:17.136030    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136034    1397 out.go:309] Setting ErrFile to fd 2...
	I0615 09:33:17.136037    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136120    1397 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 09:33:17.137161    1397 out.go:303] Setting JSON to false
	I0615 09:33:17.152121    1397 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":168,"bootTime":1686846629,"procs":371,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 09:33:17.152202    1397 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 09:33:17.156891    1397 out.go:177] * [addons-477000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 09:33:17.159887    1397 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 09:33:17.163775    1397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:33:17.159984    1397 notify.go:220] Checking for updates...
	I0615 09:33:17.171800    1397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 09:33:17.174813    1397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 09:33:17.177828    1397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 09:33:17.180704    1397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 09:33:17.183887    1397 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 09:33:17.187819    1397 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 09:33:17.194783    1397 start.go:297] selected driver: qemu2
	I0615 09:33:17.194788    1397 start.go:884] validating driver "qemu2" against <nil>
	I0615 09:33:17.194794    1397 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 09:33:17.196752    1397 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 09:33:17.199793    1397 out.go:177] * Automatically selected the socket_vmnet network
	I0615 09:33:17.201307    1397 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 09:33:17.201331    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:17.201335    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:17.201341    1397 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 09:33:17.201349    1397 start_flags.go:319] config:
	{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:17.201434    1397 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 09:33:17.209829    1397 out.go:177] * Starting control plane node addons-477000 in cluster addons-477000
	I0615 09:33:17.213726    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:17.213749    1397 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 09:33:17.213768    1397 cache.go:57] Caching tarball of preloaded images
	I0615 09:33:17.213824    1397 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 09:33:17.213830    1397 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 09:33:17.214051    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:17.214063    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json: {Name:mkc1c34b82952aae697463d2d78c6ea098445790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:17.214292    1397 start.go:365] acquiring machines lock for addons-477000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 09:33:17.214399    1397 start.go:369] acquired machines lock for "addons-477000" in 101.583µs
	I0615 09:33:17.214409    1397 start.go:93] Provisioning new machine with config: &{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:33:17.214436    1397 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 09:33:17.221743    1397 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0615 09:33:17.568031    1397 start.go:159] libmachine.API.Create for "addons-477000" (driver="qemu2")
	I0615 09:33:17.568071    1397 client.go:168] LocalClient.Create starting
	I0615 09:33:17.568226    1397 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 09:33:17.626803    1397 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 09:33:17.737973    1397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 09:33:17.902563    1397 main.go:141] libmachine: Creating SSH key...
	I0615 09:33:17.968617    1397 main.go:141] libmachine: Creating Disk image...
	I0615 09:33:17.968623    1397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 09:33:17.969817    1397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.004891    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.004923    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.004982    1397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2 +20000M
	I0615 09:33:18.012411    1397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 09:33:18.012436    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.012455    1397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
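	
	The disk preparation above reduces to two qemu-img steps: convert the raw scratch image to qcow2, then grow its virtual size. A minimal standalone sketch (the long machine-profile paths are replaced by a placeholder "disk" name; "+20000M" adds 20000 MiB of virtual size, which is cheap because qcow2 allocates lazily):
	
	    qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # raw scratch -> sparse qcow2
	    qemu-img resize disk.qcow2 +20000M                           # grow virtual size; no data written
	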
	I0615 09:33:18.012466    1397 main.go:141] libmachine: Starting QEMU VM...
	I0615 09:33:18.012501    1397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:25:cc:0f:2e:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.081537    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.081558    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.081562    1397 main.go:141] libmachine: Attempt 0
	I0615 09:33:18.081577    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:20.083712    1397 main.go:141] libmachine: Attempt 1
	I0615 09:33:20.083961    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:22.086122    1397 main.go:141] libmachine: Attempt 2
	I0615 09:33:22.086166    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:24.088201    1397 main.go:141] libmachine: Attempt 3
	I0615 09:33:24.088224    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:26.090268    1397 main.go:141] libmachine: Attempt 4
	I0615 09:33:26.090325    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:28.092361    1397 main.go:141] libmachine: Attempt 5
	I0615 09:33:28.092379    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094488    1397 main.go:141] libmachine: Attempt 6
	I0615 09:33:30.094575    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094985    1397 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0615 09:33:30.095099    1397 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 09:33:30.095124    1397 main.go:141] libmachine: Found match: 1a:25:cc:f:2e:6f
	I0615 09:33:30.095168    1397 main.go:141] libmachine: IP: 192.168.105.2
	I0615 09:33:30.095195    1397 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
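	
	Note the MAC mismatch between the QEMU command line (1a:25:cc:0f:2e:6f) and the lease search (1a:25:cc:f:2e:6f): macOS writes each octet in /var/db/dhcpd_leases without leading zeros, so the key is normalized before matching. A sketch of that lookup in shell (the sed normalization is illustrative, not minikube's code):
	
	    mac='1a:25:cc:0f:2e:6f'
	    norm=$(echo "$mac" | sed -E 's/(^|:)0([0-9a-f])/\1\2/g')   # drop per-octet leading zeros
	    grep -i "$norm" /var/db/dhcpd_leases
	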
	I0615 09:33:32.115264    1397 machine.go:88] provisioning docker machine ...
	I0615 09:33:32.115338    1397 buildroot.go:166] provisioning hostname "addons-477000"
	I0615 09:33:32.116828    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.117588    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.117607    1397 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-477000 && echo "addons-477000" | sudo tee /etc/hostname
	I0615 09:33:32.199158    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-477000
	
	I0615 09:33:32.199283    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.199748    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.199763    1397 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-477000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-477000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-477000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0615 09:33:32.260846    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
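	
	The hostname script above follows the Debian-style convention of mapping the machine's own name to 127.0.1.1, so the guest can resolve itself before any cluster DNS exists; after it runs, /etc/hosts contains a line equivalent to:
	
	    127.0.1.1 addons-477000
	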
	I0615 09:33:32.260864    1397 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16718-868/.minikube CaCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16718-868/.minikube}
	I0615 09:33:32.260878    1397 buildroot.go:174] setting up certificates
	I0615 09:33:32.260906    1397 provision.go:83] configureAuth start
	I0615 09:33:32.260912    1397 provision.go:138] copyHostCerts
	I0615 09:33:32.261103    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem (1078 bytes)
	I0615 09:33:32.261436    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem (1123 bytes)
	I0615 09:33:32.262101    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem (1679 bytes)
	I0615 09:33:32.262442    1397 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem org=jenkins.addons-477000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-477000]
	I0615 09:33:32.306279    1397 provision.go:172] copyRemoteCerts
	I0615 09:33:32.306343    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0615 09:33:32.306360    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.335305    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0615 09:33:32.343471    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0615 09:33:32.351180    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0615 09:33:32.358490    1397 provision.go:86] duration metric: configureAuth took 97.576167ms
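	
	configureAuth leaves ca.pem, server.pem and server-key.pem in /etc/docker on the guest and the matching client files (ca.pem, cert.pem, key.pem) at the top of the host's .minikube directory, so the daemon's TLS endpoint on 2376 can be exercised directly. A sketch, with $MINIKUBE_HOME standing in for this run's /Users/jenkins/minikube-integration/16718-868/.minikube:
	
	    docker --tlsverify \
	      --tlscacert "$MINIKUBE_HOME/ca.pem" \
	      --tlscert   "$MINIKUBE_HOME/cert.pem" \
	      --tlskey    "$MINIKUBE_HOME/key.pem" \
	      -H tcp://192.168.105.2:2376 version
	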
	I0615 09:33:32.358498    1397 buildroot.go:189] setting minikube options for container-runtime
	I0615 09:33:32.358950    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:33:32.358995    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.359216    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.359220    1397 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0615 09:33:32.410196    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0615 09:33:32.410204    1397 buildroot.go:70] root file system type: tmpfs
	I0615 09:33:32.410261    1397 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0615 09:33:32.410301    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.410550    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.410587    1397 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0615 09:33:32.468329    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0615 09:33:32.468380    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.468634    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.468643    1397 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0615 09:33:32.794674    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
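	
	The diff-or-move command above is an idempotent install: diff exits 0 when the unit on disk already matches the rendered one (so nothing happens) and nonzero when it differs or, as here, does not exist yet, in which case the new file is moved into place and the service reloaded. The same pattern generically (unit name and paths are placeholders):
	
	    new=/tmp/myservice.service.new                 # freshly rendered unit (placeholder)
	    cur=/etc/systemd/system/myservice.service
	    sudo diff -u "$cur" "$new" || {
	      sudo mv "$new" "$cur"
	      sudo systemctl daemon-reload && sudo systemctl restart myservice
	    }
	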
	
	I0615 09:33:32.794698    1397 machine.go:91] provisioned docker machine in 679.423792ms
	I0615 09:33:32.794704    1397 client.go:171] LocalClient.Create took 15.226996125s
	I0615 09:33:32.794723    1397 start.go:167] duration metric: libmachine.API.Create for "addons-477000" took 15.227064791s
	I0615 09:33:32.794726    1397 start.go:300] post-start starting for "addons-477000" (driver="qemu2")
	I0615 09:33:32.794731    1397 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0615 09:33:32.794812    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0615 09:33:32.794822    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.823879    1397 ssh_runner.go:195] Run: cat /etc/os-release
	I0615 09:33:32.825122    1397 info.go:137] Remote host: Buildroot 2021.02.12
	I0615 09:33:32.825128    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/addons for local assets ...
	I0615 09:33:32.825196    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/files for local assets ...
	I0615 09:33:32.825222    1397 start.go:303] post-start completed in 30.494125ms
	I0615 09:33:32.825555    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:32.825707    1397 start.go:128] duration metric: createHost completed in 15.611646375s
	I0615 09:33:32.825734    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.825947    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.825951    1397 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0615 09:33:32.876753    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686846813.320713543
	
	I0615 09:33:32.876760    1397 fix.go:206] guest clock: 1686846813.320713543
	I0615 09:33:32.876764    1397 fix.go:219] Guest: 2023-06-15 09:33:33.320713543 -0700 PDT Remote: 2023-06-15 09:33:32.825711 -0700 PDT m=+15.708594751 (delta=495.002543ms)
	I0615 09:33:32.876775    1397 fix.go:190] guest clock delta is within tolerance: 495.002543ms
	I0615 09:33:32.876778    1397 start.go:83] releasing machines lock for "addons-477000", held for 15.662753208s
	I0615 09:33:32.877060    1397 ssh_runner.go:195] Run: cat /version.json
	I0615 09:33:32.877067    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.877085    1397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0615 09:33:32.877121    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.951587    1397 ssh_runner.go:195] Run: systemctl --version
	I0615 09:33:32.953983    1397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0615 09:33:32.956008    1397 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0615 09:33:32.956040    1397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0615 09:33:32.961754    1397 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0615 09:33:32.961761    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:32.961877    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:32.967359    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0615 09:33:32.970783    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0615 09:33:32.973872    1397 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0615 09:33:32.973908    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0615 09:33:32.976794    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.979871    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0615 09:33:32.983273    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.986847    1397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0615 09:33:32.990009    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0615 09:33:32.992910    1397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0615 09:33:32.995885    1397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0615 09:33:32.999181    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.082046    1397 ssh_runner.go:195] Run: sudo systemctl restart containerd
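	
	The preceding run of sed edits rewrites /etc/containerd/config.toml even though docker ends up as the selected runtime: it pins the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, the registry.k8s.io/pause:3.9 sandbox image, and /etc/cni/net.d as the CNI conf dir. One way to confirm the result on the guest (illustrative):
	
	    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|runc' /etc/containerd/config.toml
	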
	I0615 09:33:33.090305    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:33.090367    1397 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0615 09:33:33.095444    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.099628    1397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0615 09:33:33.106008    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.110583    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.115305    1397 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0615 09:33:33.157221    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.165685    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:33.171700    1397 ssh_runner.go:195] Run: which cri-dockerd
	I0615 09:33:33.173347    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0615 09:33:33.176671    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0615 09:33:33.184036    1397 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0615 09:33:33.256172    1397 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0615 09:33:33.326477    1397 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0615 09:33:33.326492    1397 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0615 09:33:33.331797    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.394602    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:34.551420    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1568305s)
	I0615 09:33:34.551480    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.614918    1397 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0615 09:33:34.680379    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.741670    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.802995    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0615 09:33:34.810702    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.876193    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0615 09:33:34.899281    1397 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0615 09:33:34.899375    1397 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0615 09:33:34.902005    1397 start.go:534] Will wait 60s for crictl version
	I0615 09:33:34.902039    1397 ssh_runner.go:195] Run: which crictl
	I0615 09:33:34.903665    1397 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0615 09:33:34.922827    1397 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0615 09:33:34.922910    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.936535    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.948006    1397 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0615 09:33:34.948101    1397 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0615 09:33:34.949468    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 09:33:34.953059    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:34.953103    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:34.958178    1397 docker.go:636] Got preloaded images: 
	I0615 09:33:34.958185    1397 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0615 09:33:34.958223    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:34.960919    1397 ssh_runner.go:195] Run: which lz4
	I0615 09:33:34.962156    1397 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0615 09:33:34.963566    1397 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0615 09:33:34.963580    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0615 09:33:36.258372    1397 docker.go:600] Took 1.296282 seconds to copy over tarball
	I0615 09:33:36.258440    1397 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0615 09:33:37.363016    1397 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.104582708s)
	I0615 09:33:37.363034    1397 ssh_runner.go:146] rm: /preloaded.tar.lz4
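	
	The preload is an lz4-compressed tarball of a complete /var/lib/docker image store, so extracting it replaces pulling the images one by one. To inspect such a tarball without unpacking it (sketch; filename taken from this run):
	
	    lz4 -dc preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 | tar -tf - | head
	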
	I0615 09:33:37.379849    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:37.383479    1397 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0615 09:33:37.388752    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:37.449408    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:38.998025    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.548624125s)
	I0615 09:33:38.998130    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:39.004063    1397 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0615 09:33:39.004075    1397 cache_images.go:84] Images are preloaded, skipping loading
	I0615 09:33:39.004149    1397 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0615 09:33:39.011968    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:39.011977    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:39.012000    1397 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0615 09:33:39.012011    1397 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-477000 NodeName:addons-477000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0615 09:33:39.012111    1397 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-477000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
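	
	The rendered kubeadm.yaml above is four documents in one file: InitConfiguration (node-local bootstrap), ClusterConfiguration (control-plane layout), KubeletConfiguration, and KubeProxyConfiguration. On a machine with kubeadm v1.27 it can be checked without side effects (a sketch; this job does not run it):
	
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run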
	
	I0615 09:33:39.012156    1397 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-477000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0615 09:33:39.012203    1397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0615 09:33:39.015681    1397 binaries.go:44] Found k8s binaries, skipping transfer
	I0615 09:33:39.015718    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0615 09:33:39.018849    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0615 09:33:39.023920    1397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0615 09:33:39.028733    1397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0615 09:33:39.033571    1397 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0615 09:33:39.034818    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 09:33:39.038909    1397 certs.go:56] Setting up /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000 for IP: 192.168.105.2
	I0615 09:33:39.038918    1397 certs.go:190] acquiring lock for shared ca certs: {Name:mk9ee4d7ca68f2cc32c8609d33f6ce33c43a91d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.039073    1397 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key
	I0615 09:33:39.109209    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt ...
	I0615 09:33:39.109214    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt: {Name:mka7538e8370ad0560f47e28d206b077e2dbbef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109425    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key ...
	I0615 09:33:39.109428    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key: {Name:mkca6c7de675216938ac1a6663738af412e2d280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109532    1397 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key
	I0615 09:33:39.219574    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt ...
	I0615 09:33:39.219577    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt: {Name:mk21a595039c96735254391e5270364a73e52306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219709    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key ...
	I0615 09:33:39.219712    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key: {Name:mk96cab9f1987887c2b313cd365bdba518ec818b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219826    1397 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key
	I0615 09:33:39.219831    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt with IP's: []
	I0615 09:33:39.435828    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt ...
	I0615 09:33:39.435835    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: {Name:mk0f2105a4c5fdba007e9c77c7945365dc3f96af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436029    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key ...
	I0615 09:33:39.436031    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key: {Name:mk65e491f4b4c1ee8d05045efb9265b2c697a551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436124    1397 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969
	I0615 09:33:39.436133    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0615 09:33:39.510125    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 ...
	I0615 09:33:39.510129    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969: {Name:mk7c90d062166950585957cb3f0ce136594c9cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510277    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 ...
	I0615 09:33:39.510280    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969: {Name:mk4266445b8f2d5bc078d169ee24b8765955e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510384    1397 certs.go:337] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt
	I0615 09:33:39.510598    1397 certs.go:341] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key
	I0615 09:33:39.510713    1397 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key
	I0615 09:33:39.510723    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt with IP's: []
	I0615 09:33:39.610633    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt ...
	I0615 09:33:39.610637    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt: {Name:mk04e06c13fe3eccffb62f328096a02f5668baa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.610779    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key ...
	I0615 09:33:39.610783    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key: {Name:mk21e3c8e84fcac9a2d9da5e0fa06b26ad1ee7cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.611042    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem (1679 bytes)
	I0615 09:33:39.611072    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem (1078 bytes)
	I0615 09:33:39.611094    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem (1123 bytes)
	I0615 09:33:39.611429    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem (1679 bytes)
	I0615 09:33:39.611960    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0615 09:33:39.619546    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0615 09:33:39.626711    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0615 09:33:39.633501    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0615 09:33:39.640010    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0615 09:33:39.647112    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0615 09:33:39.654063    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0615 09:33:39.660582    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0615 09:33:39.667533    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0615 09:33:39.674545    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0615 09:33:39.680339    1397 ssh_runner.go:195] Run: openssl version
	I0615 09:33:39.682379    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0615 09:33:39.685404    1397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686832    1397 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 15 16:33 /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686855    1397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.688703    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
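	
	The b5213941.0 link name comes from OpenSSL's subject-hash lookup scheme: clients locate a CA by hashing its subject and opening <hash>.0 in the certs directory, and the openssl x509 -hash call above computes exactly that name. The link can be rebuilt by hand the same way (sketch):
	
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	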
	I0615 09:33:39.691957    1397 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0615 09:33:39.693381    1397 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0615 09:33:39.693418    1397 kubeadm.go:404] StartCluster: {Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:39.693485    1397 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0615 09:33:39.699291    1397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0615 09:33:39.702928    1397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0615 09:33:39.706168    1397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0615 09:33:39.708902    1397 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0615 09:33:39.708925    1397 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0615 09:33:39.731022    1397 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0615 09:33:39.731051    1397 kubeadm.go:322] [preflight] Running pre-flight checks
	I0615 09:33:39.787198    1397 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0615 09:33:39.787252    1397 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0615 09:33:39.787291    1397 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0615 09:33:39.845524    1397 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0615 09:33:39.853720    1397 out.go:204]   - Generating certificates and keys ...
	I0615 09:33:39.853771    1397 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0615 09:33:39.853800    1397 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0615 09:33:40.047052    1397 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0615 09:33:40.281668    1397 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0615 09:33:40.373604    1397 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0615 09:33:40.496002    1397 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0615 09:33:40.752895    1397 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0615 09:33:40.752975    1397 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.889354    1397 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0615 09:33:40.889424    1397 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.967392    1397 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0615 09:33:41.132487    1397 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0615 09:33:41.175551    1397 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0615 09:33:41.175583    1397 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0615 09:33:41.275708    1397 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0615 09:33:41.313261    1397 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0615 09:33:41.394612    1397 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0615 09:33:41.488793    1397 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0615 09:33:41.495623    1397 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0615 09:33:41.495672    1397 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0615 09:33:41.495691    1397 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0615 09:33:41.565044    1397 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0615 09:33:41.570236    1397 out.go:204]   - Booting up control plane ...
	I0615 09:33:41.570302    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0615 09:33:41.570344    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0615 09:33:41.570389    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0615 09:33:41.570430    1397 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0615 09:33:41.570514    1397 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0615 09:33:45.571765    1397 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003295 seconds
	I0615 09:33:45.571857    1397 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0615 09:33:45.577408    1397 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0615 09:33:46.094757    1397 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0615 09:33:46.095006    1397 kubeadm.go:322] [mark-control-plane] Marking the node addons-477000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0615 09:33:46.613385    1397 kubeadm.go:322] [bootstrap-token] Using token: f4kg8y.q60xaa2tn5uwspbb
	I0615 09:33:46.619341    1397 out.go:204]   - Configuring RBAC rules ...
	I0615 09:33:46.619403    1397 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0615 09:33:46.620813    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0615 09:33:46.624663    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0615 09:33:46.625913    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0615 09:33:46.627185    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0615 09:33:46.628306    1397 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0615 09:33:46.632523    1397 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0615 09:33:46.806353    1397 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0615 09:33:47.022461    1397 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0615 09:33:47.022733    1397 kubeadm.go:322] 
	I0615 09:33:47.022774    1397 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0615 09:33:47.022780    1397 kubeadm.go:322] 
	I0615 09:33:47.022834    1397 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0615 09:33:47.022839    1397 kubeadm.go:322] 
	I0615 09:33:47.022851    1397 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0615 09:33:47.022879    1397 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0615 09:33:47.022912    1397 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0615 09:33:47.022916    1397 kubeadm.go:322] 
	I0615 09:33:47.022952    1397 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0615 09:33:47.022958    1397 kubeadm.go:322] 
	I0615 09:33:47.022992    1397 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0615 09:33:47.022995    1397 kubeadm.go:322] 
	I0615 09:33:47.023016    1397 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0615 09:33:47.023050    1397 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0615 09:33:47.023081    1397 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0615 09:33:47.023083    1397 kubeadm.go:322] 
	I0615 09:33:47.023121    1397 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0615 09:33:47.023158    1397 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0615 09:33:47.023161    1397 kubeadm.go:322] 
	I0615 09:33:47.023197    1397 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023261    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 \
	I0615 09:33:47.023273    1397 kubeadm.go:322] 	--control-plane 
	I0615 09:33:47.023278    1397 kubeadm.go:322] 
	I0615 09:33:47.023320    1397 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0615 09:33:47.023326    1397 kubeadm.go:322] 
	I0615 09:33:47.023380    1397 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023443    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 
	I0615 09:33:47.023525    1397 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
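The [WARNING Service-Kubelet] line above is the only preflight complaint kubeadm raises here; on a disposable minikube guest it is harmless, since minikube starts the kubelet itself, but the remedy is the one-liner the warning names. A minimal sketch, assuming a shell inside the guest VM:
	# Enable kubelet at boot, as the kubeadm warning suggests (illustrative).
	sudo systemctl enable kubelet.service
	systemctl is-enabled kubelet.service   # expect: enabled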
	I0615 09:33:47.023586    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:47.023594    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:47.031274    1397 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0615 09:33:47.035321    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0615 09:33:47.038799    1397 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
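The 457-byte conflist copied above is not reproduced in the log. For orientation, a minimal sketch of a typical bridge CNI configuration of the kind minikube writes for the "qemu2" driver + "docker" runtime; every value below is illustrative, not the actual file contents:
	# Hypothetical bridge CNI config; subnet and plugin options are assumptions.
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF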
	I0615 09:33:47.043709    1397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0615 09:33:47.043747    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.043800    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627 minikube.k8s.io/name=addons-477000 minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.095849    1397 ops.go:34] apiserver oom_adj: -16
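The -16 above is the value read back by the oom_adj probe issued at 09:33:47.043709; negative values tell the kernel's OOM killer to spare the process, so the API server outlives ordinary workloads under memory pressure. The same check, illustrative only:
	# Read the API server's OOM score adjustment, as minikube does.
	cat /proc/$(pgrep kube-apiserver)/oom_adj   # -16 here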
	I0615 09:33:47.095898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.645147    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.145079    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.645093    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.145044    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.645148    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.145134    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.645328    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.145310    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.645116    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.144609    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.645278    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.145243    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.645239    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.145000    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.644744    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.145233    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.644949    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.145008    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.644938    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.143430    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.645224    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.144898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.644909    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.144773    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.644338    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.144834    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.206574    1397 kubeadm.go:1081] duration metric: took 13.163176875s to wait for elevateKubeSystemPrivileges.
	I0615 09:34:00.206587    1397 kubeadm.go:406] StartCluster complete in 20.513668625s
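The burst of identical `kubectl get sa default` runs between 09:33:47 and 09:34:00 is a poll at roughly half-second intervals: the appearance of the "default" ServiceAccount signals that the controller-manager's core loops are up, at which point elevateKubeSystemPrivileges can finish. An equivalent sketch, assuming kubectl on PATH and a reachable cluster:
	# Poll until the controller-manager has created the default ServiceAccount.
	until kubectl get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done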
	I0615 09:34:00.206614    1397 settings.go:142] acquiring lock: {Name:mk45a698fcd8dd8ae6984c9cf4ad4d183fdb5424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.206769    1397 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:34:00.206961    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/kubeconfig: {Name:mkbe9cac04fb467055323f2e3d5db2c6ddc287ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.207185    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0615 09:34:00.207249    1397 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0615 09:34:00.207317    1397 addons.go:66] Setting ingress=true in profile "addons-477000"
	I0615 09:34:00.207322    1397 addons.go:66] Setting ingress-dns=true in profile "addons-477000"
	I0615 09:34:00.207325    1397 addons.go:228] Setting addon ingress=true in "addons-477000"
	I0615 09:34:00.207327    1397 addons.go:228] Setting addon ingress-dns=true in "addons-477000"
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207358    1397 addons.go:66] Setting cloud-spanner=true in profile "addons-477000"
	I0615 09:34:00.207362    1397 addons.go:228] Setting addon cloud-spanner=true in "addons-477000"
	I0615 09:34:00.207371    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207402    1397 addons.go:66] Setting metrics-server=true in profile "addons-477000"
	I0615 09:34:00.207421    1397 addons.go:66] Setting registry=true in profile "addons-477000"
	I0615 09:34:00.207457    1397 addons.go:228] Setting addon registry=true in "addons-477000"
	I0615 09:34:00.207434    1397 addons.go:66] Setting inspektor-gadget=true in profile "addons-477000"
	I0615 09:34:00.207483    1397 addons.go:228] Setting addon inspektor-gadget=true in "addons-477000"
	I0615 09:34:00.207494    1397 addons.go:228] Setting addon metrics-server=true in "addons-477000"
	I0615 09:34:00.207502    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207433    1397 addons.go:66] Setting default-storageclass=true in profile "addons-477000"
	I0615 09:34:00.207531    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:34:00.207537    1397 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-477000"
	I0615 09:34:00.207575    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207475    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207436    1397 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-477000"
	I0615 09:34:00.207676    1397 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.207687    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207317    1397 addons.go:66] Setting volumesnapshots=true in profile "addons-477000"
	I0615 09:34:00.207735    1397 addons.go:228] Setting addon volumesnapshots=true in "addons-477000"
	I0615 09:34:00.207746    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207438    1397 addons.go:66] Setting gcp-auth=true in profile "addons-477000"
	I0615 09:34:00.207776    1397 mustload.go:65] Loading cluster: addons-477000
	I0615 09:34:00.207433    1397 addons.go:66] Setting storage-provisioner=true in profile "addons-477000"
	I0615 09:34:00.208143    1397 addons.go:228] Setting addon storage-provisioner=true in "addons-477000"
	I0615 09:34:00.208157    1397 host.go:66] Checking if "addons-477000" exists ...
	W0615 09:34:00.208299    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208325    1397 addons.go:274] "addons-477000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0615 09:34:00.208331    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208338    1397 addons.go:274] "addons-477000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0615 09:34:00.208333    1397 addons.go:464] Verifying addon registry=true in "addons-477000"
	I0615 09:34:00.211629    1397 out.go:177] * Verifying registry addon...
	W0615 09:34:00.208373    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.207993    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	W0615 09:34:00.208397    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208133    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208437    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208601    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208662    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.215730    1397 addons.go:228] Setting addon default-storageclass=true in "addons-477000"
	W0615 09:34:00.218550    1397 addons.go:274] "addons-477000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218559    1397 addons.go:274] "addons-477000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218580    1397 addons.go:274] "addons-477000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218593    1397 addons.go:274] "addons-477000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0615 09:34:00.218944    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0615 09:34:00.219273    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.221576    1397 addons.go:464] Verifying addon metrics-server=true in "addons-477000"
	I0615 09:34:00.221583    1397 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0615 09:34:00.224623    1397 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0615 09:34:00.224631    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0615 09:34:00.224638    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.221629    1397 addons.go:464] Verifying addon ingress=true in "addons-477000"
	I0615 09:34:00.221636    1397 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.221723    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.228543    1397 out.go:177] * Verifying ingress addon...
	I0615 09:34:00.238557    1397 out.go:177] * Verifying csi-hostpath-driver addon...
	I0615 09:34:00.229248    1397 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.235988    1397 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0615 09:34:00.241352    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0615 09:34:00.242560    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0615 09:34:00.242594    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.242973    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0615 09:34:00.245385    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0615 09:34:00.251897    1397 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0615 09:34:00.275682    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
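The sed pipeline above splices a hosts block into the coredns ConfigMap ahead of its forward plugin, so host.minikube.internal resolves to the host gateway (192.168.105.1) from inside the cluster. A sketch of verifying the result, kubectl access assumed:
	# Show the injected stanza; expected output sketched in the comments.
	kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
	#        hosts {
	#           192.168.105.1 host.minikube.internal
	#           fallthrough
	#        }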
	I0615 09:34:00.278596    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0615 09:34:00.278604    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0615 09:34:00.310787    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0615 09:34:00.310799    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0615 09:34:00.340750    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0615 09:34:00.340763    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0615 09:34:00.370147    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.374756    1397 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0615 09:34:00.374766    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0615 09:34:00.391483    1397 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.391493    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0615 09:34:00.396735    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.725129    1397 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-477000" context rescaled to 1 replicas
	I0615 09:34:00.725155    1397 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:34:00.731827    1397 out.go:177] * Verifying Kubernetes components...
	I0615 09:34:00.735986    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 09:34:01.130555    1397 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0615 09:34:01.273320    1397 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0615 09:34:01.273347    1397 retry.go:31] will retry after 358.412085ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
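The failure is an ordering race, not a bad manifest: the VolumeSnapshotClass CR and the CRD that defines it are applied in a single kubectl invocation, and the CR is rejected because API discovery has not yet registered the new kind ("ensure CRDs are installed first"). The `apply --force` retry below succeeds once the CRDs exist. A race-free ordering, sketched with file names from the log:
	# Install the CRD, wait until it is established, then apply the CR.
	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml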
	I0615 09:34:01.273795    1397 node_ready.go:35] waiting up to 6m0s for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275272    1397 node_ready.go:49] node "addons-477000" has status "Ready":"True"
	I0615 09:34:01.275281    1397 node_ready.go:38] duration metric: took 1.477792ms waiting for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275284    1397 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 09:34:01.279498    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:01.633151    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:02.299497    1397 pod_ready.go:92] pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.299518    1397 pod_ready.go:81] duration metric: took 1.020034208s waiting for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.299526    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303736    1397 pod_ready.go:92] pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.303743    1397 pod_ready.go:81] duration metric: took 4.212458ms waiting for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303749    1397 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307230    1397 pod_ready.go:92] pod "etcd-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.307237    1397 pod_ready.go:81] duration metric: took 3.484042ms waiting for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307243    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311004    1397 pod_ready.go:92] pod "kube-apiserver-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.311013    1397 pod_ready.go:81] duration metric: took 3.766916ms waiting for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311019    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481809    1397 pod_ready.go:92] pod "kube-controller-manager-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.481828    1397 pod_ready.go:81] duration metric: took 170.807958ms waiting for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481838    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883885    1397 pod_ready.go:92] pod "kube-proxy-8rgcs" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.883919    1397 pod_ready.go:81] duration metric: took 402.082375ms waiting for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883933    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277736    1397 pod_ready.go:92] pod "kube-scheduler-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:03.277748    1397 pod_ready.go:81] duration metric: took 393.817875ms waiting for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277754    1397 pod_ready.go:38] duration metric: took 2.002511417s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 09:34:03.277768    1397 api_server.go:52] waiting for apiserver process to appear ...
	I0615 09:34:03.277845    1397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 09:34:03.986804    1397 api_server.go:72] duration metric: took 3.261712416s to wait for apiserver process to appear ...
	I0615 09:34:03.986816    1397 api_server.go:88] waiting for apiserver healthz status ...
	I0615 09:34:03.986824    1397 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0615 09:34:03.986882    1397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.353767917s)
	I0615 09:34:03.990093    1397 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0615 09:34:03.990734    1397 api_server.go:141] control plane version: v1.27.3
	I0615 09:34:03.990742    1397 api_server.go:131] duration metric: took 3.923291ms to wait for apiserver health ...
	I0615 09:34:03.990745    1397 system_pods.go:43] waiting for kube-system pods to appear ...
	I0615 09:34:03.993833    1397 system_pods.go:59] 9 kube-system pods found
	I0615 09:34:03.993840    1397 system_pods.go:61] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.993843    1397 system_pods.go:61] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.993845    1397 system_pods.go:61] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.993848    1397 system_pods.go:61] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.993851    1397 system_pods.go:61] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.993853    1397 system_pods.go:61] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.993855    1397 system_pods.go:61] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.993859    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993864    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993866    1397 system_pods.go:74] duration metric: took 3.119166ms to wait for pod list to return data ...
	I0615 09:34:03.993869    1397 default_sa.go:34] waiting for default service account to be created ...
	I0615 09:34:03.995049    1397 default_sa.go:45] found service account: "default"
	I0615 09:34:03.995055    1397 default_sa.go:55] duration metric: took 1.183708ms for default service account to be created ...
	I0615 09:34:03.995057    1397 system_pods.go:116] waiting for k8s-apps to be running ...
	I0615 09:34:03.998400    1397 system_pods.go:86] 9 kube-system pods found
	I0615 09:34:03.998409    1397 system_pods.go:89] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.998411    1397 system_pods.go:89] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.998414    1397 system_pods.go:89] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.998416    1397 system_pods.go:89] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.998419    1397 system_pods.go:89] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.998421    1397 system_pods.go:89] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.998424    1397 system_pods.go:89] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.998429    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998433    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998436    1397 system_pods.go:126] duration metric: took 3.376208ms to wait for k8s-apps to be running ...
	I0615 09:34:03.998439    1397 system_svc.go:44] waiting for kubelet service to be running ....
	I0615 09:34:03.998489    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 09:34:04.003913    1397 system_svc.go:56] duration metric: took 5.471458ms WaitForService to wait for kubelet.
	I0615 09:34:04.003921    1397 kubeadm.go:581] duration metric: took 3.278833625s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0615 09:34:04.003932    1397 node_conditions.go:102] verifying NodePressure condition ...
	I0615 09:34:04.077208    1397 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0615 09:34:04.077239    1397 node_conditions.go:123] node cpu capacity is 2
	I0615 09:34:04.077244    1397 node_conditions.go:105] duration metric: took 73.311333ms to run NodePressure ...
	I0615 09:34:04.077249    1397 start.go:228] waiting for startup goroutines ...
	I0615 09:34:06.831960    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0615 09:34:06.832053    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.882622    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0615 09:34:06.891297    1397 addons.go:228] Setting addon gcp-auth=true in "addons-477000"
	I0615 09:34:06.891339    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:06.892599    1397 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0615 09:34:06.892612    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.928262    1397 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0615 09:34:06.932997    1397 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0615 09:34:06.937187    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0615 09:34:06.937194    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0615 09:34:06.943495    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0615 09:34:06.943502    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0615 09:34:06.949337    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:06.949343    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0615 09:34:06.954968    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:07.475178    1397 addons.go:464] Verifying addon gcp-auth=true in "addons-477000"
	I0615 09:34:07.478304    1397 out.go:177] * Verifying gcp-auth addon...
	I0615 09:34:07.485666    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0615 09:34:07.491991    1397 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0615 09:34:07.492002    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:07.996710    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:08.496921    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.002133    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.494606    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.995080    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.495704    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.995530    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:11.495877    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.001470    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.497446    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.001473    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.502268    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.997362    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.503184    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.997798    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:15.495991    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.000278    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.501895    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.001719    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.495416    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.995757    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:18.496835    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:19.002478    1397 kapi.go:107] duration metric: took 11.51706925s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0615 09:34:19.008171    1397 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-477000 cluster.
	I0615 09:34:19.011889    1397 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0615 09:34:19.016120    1397 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
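The gcp-auth addon installed above works as a mutating admission webhook: new pods get the mounted credentials unless they carry the `gcp-auth-skip-secret` label the message mentions. An illustrative opt-out; the pod name, image, and label value are placeholders, and the webhook keys off the label itself:
	# Launch a pod that the gcp-auth webhook will leave unmodified.
	kubectl run skip-demo --image=busybox --labels=gcp-auth-skip-secret=true \
	  --restart=Never -- sleep 3600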
	I0615 09:40:00.215049    1397 kapi.go:107] duration metric: took 6m0.00478975s to wait for kubernetes.io/minikube-addons=registry ...
	W0615 09:40:00.215455    1397 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0615 09:40:00.235888    1397 kapi.go:107] duration metric: took 6m0.001641708s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0615 09:40:00.235937    1397 kapi.go:107] duration metric: took 6m0.008679083s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0615 09:40:00.236028    1397 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0615 09:40:00.236088    1397 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0615 09:40:00.243957    1397 out.go:177] * Enabled addons: inspektor-gadget, metrics-server, cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0615 09:40:00.250970    1397 addons.go:499] enable addons completed in 6m0.052449292s: enabled=[inspektor-gadget metrics-server cloud-spanner ingress-dns storage-provisioner default-storageclass volumesnapshots gcp-auth]
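Three addons (registry, ingress, csi-hostpath-driver) are missing from the enabled list because each waiter found 0 pods for its label selector at 09:34:00 and then sat out the full 6m deadline, matching the "! Enabling ... returned an error" warnings just above. A triage sketch using the same selectors the waiters polled:
	# List pods by the selectors the three timed-out waiters used.
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx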
	I0615 09:40:00.251042    1397 start.go:233] waiting for cluster config update ...
	I0615 09:40:00.251069    1397 start.go:242] writing updated cluster config ...
	I0615 09:40:00.255738    1397 ssh_runner.go:195] Run: rm -f paused
	I0615 09:40:00.403218    1397 start.go:582] kubectl: 1.25.9, cluster: 1.27.3 (minor skew: 2)
	I0615 09:40:00.405982    1397 out.go:177] 
	W0615 09:40:00.410033    1397 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.3.
	I0615 09:40:00.413868    1397 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0615 09:40:00.421952    1397 out.go:177] * Done! kubectl is now configured to use "addons-477000" cluster and "default" namespace by default
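The closing warning is the standard client/server skew check: kubectl is supported within one minor version of the API server, and 1.25.9 against 1.27.3 is a skew of two. The bundled binary sidesteps the mismatch, as the hint above says:
	# Use minikube's bundled kubectl (v1.27.3) instead of the system 1.25.9.
	minikube kubectl -- version
	minikube kubectl -- get pods -A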
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 16:52:00 UTC. --
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.712778061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.712798672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:12 addons-477000 dockerd[1091]: time="2023-06-15T16:34:12.754255249Z" level=info msg="ignoring event" container=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754320625Z" level=info msg="shim disconnected" id=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c namespace=moby
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754344286Z" level=warning msg="cleaning up after shim disconnected" id=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c namespace=moby
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754349480Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.793607046Z" level=info msg="shim disconnected" id=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1091]: time="2023-06-15T16:34:13.793691952Z" level=info msg="ignoring event" container=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.794164777Z" level=warning msg="cleaning up after shim disconnected" id=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.794176995Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1091]: time="2023-06-15T16:34:14.817608715Z" level=info msg="ignoring event" container=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.817442834Z" level=info msg="shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819525140Z" level=warning msg="cleaning up after shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819574276Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.579995441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580279599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580317285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580357122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c316c00ee755585c1753e0f1d6364e1731871da5d072484c67c43cac67cd349/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 15 16:34:15 addons-477000 dockerd[1091]: time="2023-06-15T16:34:15.926803390Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:18Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269480431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269881621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269894061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269898788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	6a4bcd8ac64ff       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              17 minutes ago      Running             gcp-auth                     0                   0c316c00ee755
	8527d6f42bef1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   17 minutes ago      Running             volume-snapshot-controller   0                   f6bd41ad4abf6
	06a9dab9c48b6       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   17 minutes ago      Running             volume-snapshot-controller   0                   629404aaee996
	256eaaad3894a       97e04611ad434                                                                                                             17 minutes ago      Running             coredns                      0                   f6fc2a0d05c4a
	29b72a92c6578       fb73e92641fd5                                                                                                             18 minutes ago      Running             kube-proxy                   0                   405ca9198a355
	733213e41e3b9       bcb9e554eaab6                                                                                                             18 minutes ago      Running             kube-scheduler               0                   25817e506c78b
	b11fb0f325644       39dfb036b0986                                                                                                             18 minutes ago      Running             kube-apiserver               0                   0dde73a500899
	66de98cb24ea0       ab3683b584ae5                                                                                                             18 minutes ago      Running             kube-controller-manager      0                   69ef168f52131
	41a6909f99a59       24bc64e911039                                                                                                             18 minutes ago      Running             etcd                         0                   9b969e901cc05
	
	* 
	* ==> coredns [256eaaad3894] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55502 - 31535 "HINFO IN 8156761713541019547.3807690688336836625. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.006087175s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-477000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-477000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627
	                    minikube.k8s.io/name=addons-477000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Jun 2023 16:33:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-477000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Jun 2023 16:52:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Jun 2023 16:50:10 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Jun 2023 16:50:10 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Jun 2023 16:50:10 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Jun 2023 16:50:10 +0000   Thu, 15 Jun 2023 16:33:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-477000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 5009a87f17804889a5a4616073b937e0
	  System UUID:                5009a87f17804889a5a4616073b937e0
	  Boot ID:                    9630f686-3c90-436f-98e6-d8c6686f510a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-2pgxv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 coredns-5d78c9869d-mds5s                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     18m
	  kube-system                 etcd-addons-477000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         18m
	  kube-system                 kube-apiserver-addons-477000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-addons-477000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-8rgcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-addons-477000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-75bbb956b9-p6hk4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 snapshot-controller-75bbb956b9-prqv8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                kubelet          Node addons-477000 status is now: NodeReady
	  Normal  RegisteredNode           18m                node-controller  Node addons-477000 event: Registered Node addons-477000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.687452] EINJ: EINJ table not found.
	[  +0.627011] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043411] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000812] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.868427] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.067044] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.422105] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.174556] systemd-fstab-generator[729]: Ignoring "noauto" for root device
	[  +0.069729] systemd-fstab-generator[740]: Ignoring "noauto" for root device
	[  +0.066761] systemd-fstab-generator[753]: Ignoring "noauto" for root device
	[  +1.220689] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[  +0.067164] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.058616] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.062347] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +0.069889] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +2.576383] systemd-fstab-generator[1084]: Ignoring "noauto" for root device
	[  +1.530737] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.580821] systemd-fstab-generator[1404]: Ignoring "noauto" for root device
	[  +5.139726] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[Jun15 16:34] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.392909] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.125194] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.280200] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.114632] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [41a6909f99a5] <==
	* {"level":"info","ts":"2023-06-15T16:33:43.168Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.2:2380"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-477000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-06-15T16:43:43.957Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":747}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":747,"took":"2.441695ms","hash":524925281}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":524925281,"revision":747,"compact-revision":-1}
	{"level":"info","ts":"2023-06-15T16:48:43.971Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":897}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":897,"took":"1.284283ms","hash":2514030906}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2514030906,"revision":897,"compact-revision":747}
	
	* 
	* ==> gcp-auth [6a4bcd8ac64f] <==
	* 2023/06/15 16:34:18 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  16:52:01 up 18 min,  0 users,  load average: 0.71, 0.65, 0.44
	Linux addons-477000 5.10.57 #1 SMP PREEMPT Wed Jun 14 05:08:37 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b11fb0f32564] <==
	* I0615 16:34:01.646985       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:34:01.652982       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:34:01.653098       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:34:01.682045       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:34:01.682170       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:34:01.699943       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:34:01.700307       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:34:01.707459       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:34:01.707488       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:34:07.699142       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.109.79.243]
	I0615 16:34:07.712516       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0615 16:38:44.748912       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:38:44.749316       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:38:44.757965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:38:44.758323       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.761094       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.761179       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.769254       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.769477       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.769594       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.769676       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:48:44.750393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:48:44.750930       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:48:44.765734       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:48:44.766097       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [66de98cb24ea] <==
	* I0615 16:34:13.731338       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:13.817171       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.755476       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:14.766120       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.820829       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.823621       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.825690       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0615 16:34:14.825754       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.826668       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.850370       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.758931       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.761497       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.764164       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0615 16:34:15.764226       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.766220       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.768259       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:29.767346       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0615 16:34:29.767460       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0615 16:34:29.868459       1 shared_informer.go:318] Caches are synced for resource quota
	I0615 16:34:30.190420       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0615 16:34:30.296099       1 shared_informer.go:318] Caches are synced for garbage collector
	I0615 16:34:44.034182       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:44.057184       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:45.016712       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:45.039501       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [29b72a92c657] <==
	* I0615 16:34:01.157223       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0615 16:34:01.157274       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0615 16:34:01.157290       1 server_others.go:554] "Using iptables proxy"
	I0615 16:34:01.207136       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0615 16:34:01.207158       1 server_others.go:192] "Using iptables Proxier"
	I0615 16:34:01.207188       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0615 16:34:01.207493       1 server.go:658] "Version info" version="v1.27.3"
	I0615 16:34:01.207499       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0615 16:34:01.208029       1 config.go:188] "Starting service config controller"
	I0615 16:34:01.208049       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0615 16:34:01.208060       1 config.go:97] "Starting endpoint slice config controller"
	I0615 16:34:01.208062       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0615 16:34:01.209533       1 config.go:315] "Starting node config controller"
	I0615 16:34:01.209537       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0615 16:34:01.308743       1 shared_informer.go:318] Caches are synced for service config
	I0615 16:34:01.308782       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0615 16:34:01.309993       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [733213e41e3b] <==
	* W0615 16:33:44.753904       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:44.754011       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:44.754034       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:44.754072       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:44.754021       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:44.754081       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:44.754001       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0615 16:33:44.754100       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0615 16:33:44.754136       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0615 16:33:44.754145       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0615 16:33:45.605616       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0615 16:33:45.605673       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0615 16:33:45.647245       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:45.647292       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:45.699650       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0615 16:33:45.699699       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0615 16:33:45.702358       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0615 16:33:45.702403       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0615 16:33:45.718371       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:45.718408       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:45.723261       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:45.723281       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:45.755043       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0615 16:33:45.755066       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0615 16:33:46.350596       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 16:52:01 UTC. --
	Jun 15 16:46:47 addons-477000 kubelet[2256]: E0615 16:46:47.331362    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:46:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:46:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:46:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:47:47 addons-477000 kubelet[2256]: E0615 16:47:47.332585    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:47:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:47:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:47:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:48:47 addons-477000 kubelet[2256]: W0615 16:48:47.317251    2256 machine.go:65] Cannot read vendor id correctly, set empty.
	Jun 15 16:48:47 addons-477000 kubelet[2256]: E0615 16:48:47.329501    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:48:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:48:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:48:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:49:47 addons-477000 kubelet[2256]: E0615 16:49:47.330743    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:49:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:49:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:49:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:50:47 addons-477000 kubelet[2256]: E0615 16:50:47.330157    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:50:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:50:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:50:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:51:47 addons-477000 kubelet[2256]: E0615 16:51:47.331536    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:51:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:51:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:51:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-477000 -n addons-477000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-477000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Registry FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Registry (720.86s)
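
Note on the kubelet journal above: the `Could not set up iptables canary` error repeats once a minute because kubelet periodically (re)creates a canary chain in both iptables and ip6tables to detect rule flushes, and the ip6tables half can never succeed here since the guest kernel exposes no IPv6 `nat` table. A quick way to confirm this from the host is sketched below; the commands are an assumption for this profile (the module directory path in particular is not taken from the report):

	# Reproduce the failure interactively inside the VM; a kernel without
	# IPv6 NAT support prints the same "Table does not exist" message.
	out/minikube-darwin-arm64 -p addons-477000 ssh -- sudo ip6tables -t nat -L

	# Check whether the ip6table_nat module was shipped with the guest kernel.
	out/minikube-darwin-arm64 -p addons-477000 ssh -- \
	  'ls /lib/modules/$(uname -r)/kernel/net/ipv6/netfilter/ 2>/dev/null'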

TestAddons/parallel/Ingress (0.74s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-477000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Non-zero exit: kubectl --context addons-477000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: exit status 1 (36.213583ms)

** stderr ** 
	error: no matching resources found

** /stderr **
addons_test.go:184: failed waiting for ingress-nginx-controller : exit status 1
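
The `no matching resources found` stderr above means the wait's label selector matched zero pods: the ingress-nginx controller was never created, as opposed to created but failing readiness. Assuming the same kubectl context and the standard labels the test waits on, a minimal follow-up check would be:

	# Empty output here confirms the controller pod never existed.
	kubectl --context addons-477000 get pods -n ingress-nginx \
	  --selector=app.kubernetes.io/component=controller

	# Recent namespace events usually show why the addon failed to deploy.
	kubectl --context addons-477000 get events -n ingress-nginx --sort-by=.lastTimestamp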
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-477000 -n addons-477000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-477000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | --download-only -p             | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT |                     |
	|         | binary-mirror-062000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-062000        | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | -p addons-477000               | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:40 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:52 PDT |                     |
	|         | addons-477000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 09:33:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 09:33:17.135905    1397 out.go:296] Setting OutFile to fd 1 ...
	I0615 09:33:17.136030    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136034    1397 out.go:309] Setting ErrFile to fd 2...
	I0615 09:33:17.136037    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136120    1397 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 09:33:17.137161    1397 out.go:303] Setting JSON to false
	I0615 09:33:17.152121    1397 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":168,"bootTime":1686846629,"procs":371,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 09:33:17.152202    1397 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 09:33:17.156891    1397 out.go:177] * [addons-477000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 09:33:17.159887    1397 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 09:33:17.163775    1397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:33:17.159984    1397 notify.go:220] Checking for updates...
	I0615 09:33:17.171800    1397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 09:33:17.174813    1397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 09:33:17.177828    1397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 09:33:17.180704    1397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 09:33:17.183887    1397 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 09:33:17.187819    1397 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 09:33:17.194783    1397 start.go:297] selected driver: qemu2
	I0615 09:33:17.194788    1397 start.go:884] validating driver "qemu2" against <nil>
	I0615 09:33:17.194794    1397 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 09:33:17.196752    1397 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 09:33:17.199793    1397 out.go:177] * Automatically selected the socket_vmnet network
	I0615 09:33:17.201307    1397 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 09:33:17.201331    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:17.201335    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:17.201341    1397 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 09:33:17.201349    1397 start_flags.go:319] config:
	{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:17.201434    1397 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 09:33:17.209829    1397 out.go:177] * Starting control plane node addons-477000 in cluster addons-477000
	I0615 09:33:17.213726    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:17.213749    1397 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 09:33:17.213768    1397 cache.go:57] Caching tarball of preloaded images
	I0615 09:33:17.213824    1397 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 09:33:17.213830    1397 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 09:33:17.214051    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:17.214063    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json: {Name:mkc1c34b82952aae697463d2d78c6ea098445790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:17.214292    1397 start.go:365] acquiring machines lock for addons-477000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 09:33:17.214399    1397 start.go:369] acquired machines lock for "addons-477000" in 101.583µs
	I0615 09:33:17.214409    1397 start.go:93] Provisioning new machine with config: &{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:33:17.214436    1397 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 09:33:17.221743    1397 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0615 09:33:17.568031    1397 start.go:159] libmachine.API.Create for "addons-477000" (driver="qemu2")
	I0615 09:33:17.568071    1397 client.go:168] LocalClient.Create starting
	I0615 09:33:17.568226    1397 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 09:33:17.626803    1397 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 09:33:17.737973    1397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 09:33:17.902563    1397 main.go:141] libmachine: Creating SSH key...
	I0615 09:33:17.968617    1397 main.go:141] libmachine: Creating Disk image...
	I0615 09:33:17.968623    1397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 09:33:17.969817    1397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.004891    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.004923    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.004982    1397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2 +20000M
	I0615 09:33:18.012411    1397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 09:33:18.012436    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.012455    1397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.012466    1397 main.go:141] libmachine: Starting QEMU VM...
	I0615 09:33:18.012501    1397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:25:cc:0f:2e:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.081537    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.081558    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.081562    1397 main.go:141] libmachine: Attempt 0
	I0615 09:33:18.081577    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:20.083712    1397 main.go:141] libmachine: Attempt 1
	I0615 09:33:20.083961    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:22.086122    1397 main.go:141] libmachine: Attempt 2
	I0615 09:33:22.086166    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:24.088201    1397 main.go:141] libmachine: Attempt 3
	I0615 09:33:24.088224    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:26.090268    1397 main.go:141] libmachine: Attempt 4
	I0615 09:33:26.090325    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:28.092361    1397 main.go:141] libmachine: Attempt 5
	I0615 09:33:28.092379    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094488    1397 main.go:141] libmachine: Attempt 6
	I0615 09:33:30.094575    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094985    1397 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0615 09:33:30.095099    1397 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 09:33:30.095124    1397 main.go:141] libmachine: Found match: 1a:25:cc:f:2e:6f
	I0615 09:33:30.095168    1397 main.go:141] libmachine: IP: 192.168.105.2
	I0615 09:33:30.095195    1397 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
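
Note that the MAC being searched for (1a:25:cc:f:2e:6f) is not byte-for-byte the one passed to QEMU (1a:25:cc:0f:2e:6f): macOS writes /var/db/dhcpd_leases octets without leading zeros (the matched lease entry above confirms this), so the driver strips them before matching. A sketch of one lookup pass over the lease file, assuming the usual name=/ip_address=/hw_address= block layout; the real driver wraps this in the 2-second retry loop seen above:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP returns the ip_address of the dhcpd_leases block whose
	// hw_address matches mac. mac must already have per-octet leading
	// zeros stripped, matching how macOS writes the file.
	func findLeaseIP(leasesPath, mac string) (string, error) {
		f, err := os.Open(leasesPath)
		if err != nil {
			return "", err
		}
		defer f.Close()

		ip := ""
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if v, ok := strings.CutPrefix(line, "ip_address="); ok {
				ip = v // remembered until the block's hw_address line
			}
			if v, ok := strings.CutPrefix(line, "hw_address="); ok {
				// hw_address carries an ARP hardware-type prefix, e.g. "1,".
				if strings.TrimPrefix(v, "1,") == mac && ip != "" {
					return ip, nil
				}
			}
		}
		if err := sc.Err(); err != nil {
			return "", err
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "1a:25:cc:f:2e:6f")
		fmt.Println(ip, err)
	}
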
	I0615 09:33:32.115264    1397 machine.go:88] provisioning docker machine ...
	I0615 09:33:32.115338    1397 buildroot.go:166] provisioning hostname "addons-477000"
	I0615 09:33:32.116828    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.117588    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.117607    1397 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-477000 && echo "addons-477000" | sudo tee /etc/hostname
	I0615 09:33:32.199158    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-477000
	
	I0615 09:33:32.199283    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.199748    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.199763    1397 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-477000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-477000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-477000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0615 09:33:32.260846    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0615 09:33:32.260864    1397 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16718-868/.minikube CaCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16718-868/.minikube}
	I0615 09:33:32.260878    1397 buildroot.go:174] setting up certificates
	I0615 09:33:32.260906    1397 provision.go:83] configureAuth start
	I0615 09:33:32.260912    1397 provision.go:138] copyHostCerts
	I0615 09:33:32.261103    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem (1078 bytes)
	I0615 09:33:32.261436    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem (1123 bytes)
	I0615 09:33:32.262101    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem (1679 bytes)
	I0615 09:33:32.262442    1397 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem org=jenkins.addons-477000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-477000]
	I0615 09:33:32.306279    1397 provision.go:172] copyRemoteCerts
	I0615 09:33:32.306343    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0615 09:33:32.306360    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.335305    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0615 09:33:32.343471    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0615 09:33:32.351180    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0615 09:33:32.358490    1397 provision.go:86] duration metric: configureAuth took 97.576167ms
	I0615 09:33:32.358498    1397 buildroot.go:189] setting minikube options for container-runtime
	I0615 09:33:32.358950    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:33:32.358995    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.359216    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.359220    1397 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0615 09:33:32.410196    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0615 09:33:32.410204    1397 buildroot.go:70] root file system type: tmpfs
	I0615 09:33:32.410261    1397 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0615 09:33:32.410301    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.410550    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.410587    1397 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0615 09:33:32.468329    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0615 09:33:32.468380    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.468634    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.468643    1397 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0615 09:33:32.794674    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0615 09:33:32.794698    1397 machine.go:91] provisioned docker machine in 679.423792ms
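
The "diff ... || { mv ... }" one-liner above is an idempotence guard: the unit is written to docker.service.new, compared against the installed copy, and only on a difference (or, as here on first boot, a missing file) moved into place followed by daemon-reload, enable, and restart. The same pattern locally, as a sketch; the real code runs these steps over SSH with sudo:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installUnit swaps in a systemd unit only when its content changed,
	// then reloads systemd and (re)starts the service.
	func installUnit(path string, content []byte, service string) error {
		if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, content) {
			return nil // unchanged: skip the disruptive restart
		}
		if err := os.WriteFile(path+".new", content, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", service}, {"restart", service},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=example\n")
		fmt.Println(installUnit("/lib/systemd/system/docker.service", unit, "docker"))
	}
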
	I0615 09:33:32.794704    1397 client.go:171] LocalClient.Create took 15.226996125s
	I0615 09:33:32.794723    1397 start.go:167] duration metric: libmachine.API.Create for "addons-477000" took 15.227064791s
	I0615 09:33:32.794726    1397 start.go:300] post-start starting for "addons-477000" (driver="qemu2")
	I0615 09:33:32.794731    1397 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0615 09:33:32.794812    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0615 09:33:32.794822    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.823879    1397 ssh_runner.go:195] Run: cat /etc/os-release
	I0615 09:33:32.825122    1397 info.go:137] Remote host: Buildroot 2021.02.12
	I0615 09:33:32.825128    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/addons for local assets ...
	I0615 09:33:32.825196    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/files for local assets ...
	I0615 09:33:32.825222    1397 start.go:303] post-start completed in 30.494125ms
	I0615 09:33:32.825555    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:32.825707    1397 start.go:128] duration metric: createHost completed in 15.611646375s
	I0615 09:33:32.825734    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.825947    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.825951    1397 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0615 09:33:32.876753    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686846813.320713543
	
	I0615 09:33:32.876760    1397 fix.go:206] guest clock: 1686846813.320713543
	I0615 09:33:32.876764    1397 fix.go:219] Guest: 2023-06-15 09:33:33.320713543 -0700 PDT Remote: 2023-06-15 09:33:32.825711 -0700 PDT m=+15.708594751 (delta=495.002543ms)
	I0615 09:33:32.876775    1397 fix.go:190] guest clock delta is within tolerance: 495.002543ms
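
The guest-clock check above runs `date +%s.%N` on the VM and compares the result against the host's wall clock; here the 495ms delta is accepted. A sketch of the parse-and-compare step (the exact tolerance is not shown in this log, so the 1s bound below is an assumption):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses `date +%s.%N` output and returns the guest's
	// offset from hostNow. %N is zero-padded to nine digits.
	func guestClockDelta(out string, hostNow time.Time) (time.Duration, error) {
		secStr, nsecStr, ok := strings.Cut(strings.TrimSpace(out), ".")
		if !ok {
			return 0, fmt.Errorf("unexpected date output %q", out)
		}
		sec, err := strconv.ParseInt(secStr, 10, 64)
		if err != nil {
			return 0, err
		}
		nsec, err := strconv.ParseInt(nsecStr, 10, 64)
		if err != nil {
			return 0, err
		}
		return time.Unix(sec, nsec).Sub(hostNow), nil
	}

	func main() {
		d, err := guestClockDelta("1686846813.320713543", time.Now())
		if err != nil {
			panic(err)
		}
		// The 1s tolerance here is an assumption for illustration.
		fmt.Printf("delta=%v within tolerance: %v\n", d, d.Abs() < time.Second)
	}
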
	I0615 09:33:32.876778    1397 start.go:83] releasing machines lock for "addons-477000", held for 15.662753208s
	I0615 09:33:32.877060    1397 ssh_runner.go:195] Run: cat /version.json
	I0615 09:33:32.877067    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.877085    1397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0615 09:33:32.877121    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.951587    1397 ssh_runner.go:195] Run: systemctl --version
	I0615 09:33:32.953983    1397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0615 09:33:32.956008    1397 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0615 09:33:32.956040    1397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0615 09:33:32.961754    1397 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0615 09:33:32.961761    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:32.961877    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:32.967359    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0615 09:33:32.970783    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0615 09:33:32.973872    1397 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0615 09:33:32.973908    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0615 09:33:32.976794    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.979871    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0615 09:33:32.983273    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.986847    1397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0615 09:33:32.990009    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
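
Each sed command above performs one in-place edit of /etc/containerd/config.toml; the SystemdCgroup one selects the cgroupfs driver. The same edit expressed directly in Go, as a sketch (the runner actually does it with sed over SSH):

	package main

	import (
		"log"
		"os"
		"regexp"
	)

	// setCgroupfs forces SystemdCgroup = false in containerd's config.toml,
	// preserving the original indentation like the sed capture group does.
	func setCgroupfs(path string) error {
		b, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		return os.WriteFile(path, re.ReplaceAll(b, []byte("${1}SystemdCgroup = false")), 0o644)
	}

	func main() {
		if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
			log.Fatal(err)
		}
	}
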
	I0615 09:33:32.992910    1397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0615 09:33:32.995885    1397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0615 09:33:32.999181    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.082046    1397 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0615 09:33:33.090305    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:33.090367    1397 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0615 09:33:33.095444    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.099628    1397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0615 09:33:33.106008    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.110583    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.115305    1397 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0615 09:33:33.157221    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.165685    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:33.171700    1397 ssh_runner.go:195] Run: which cri-dockerd
	I0615 09:33:33.173347    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0615 09:33:33.176671    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0615 09:33:33.184036    1397 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0615 09:33:33.256172    1397 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0615 09:33:33.326477    1397 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0615 09:33:33.326492    1397 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0615 09:33:33.331797    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.394602    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:34.551420    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1568305s)
	I0615 09:33:34.551480    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.614918    1397 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0615 09:33:34.680379    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.741670    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.802995    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0615 09:33:34.810702    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.876193    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0615 09:33:34.899281    1397 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0615 09:33:34.899375    1397 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0615 09:33:34.902005    1397 start.go:534] Will wait 60s for crictl version
	I0615 09:33:34.902039    1397 ssh_runner.go:195] Run: which crictl
	I0615 09:33:34.903665    1397 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0615 09:33:34.922827    1397 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0615 09:33:34.922910    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.936535    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.948006    1397 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0615 09:33:34.948101    1397 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0615 09:33:34.949468    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 09:33:34.953059    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:34.953103    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:34.958178    1397 docker.go:636] Got preloaded images: 
	I0615 09:33:34.958185    1397 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0615 09:33:34.958223    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:34.960919    1397 ssh_runner.go:195] Run: which lz4
	I0615 09:33:34.962156    1397 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0615 09:33:34.963566    1397 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0615 09:33:34.963580    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0615 09:33:36.258372    1397 docker.go:600] Took 1.296282 seconds to copy over tarball
	I0615 09:33:36.258440    1397 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0615 09:33:37.363016    1397 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.104582708s)
	I0615 09:33:37.363034    1397 ssh_runner.go:146] rm: /preloaded.tar.lz4
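
The preload fast-path above: a `stat -c "%s %y"` existence check on the guest fails, so the 343 MB image tarball is copied over, unpacked into /var with tar's -I lz4 decompressor, and then deleted. The unpack-and-clean step, sketched locally:

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
	)

	// extractPreload unpacks an lz4-compressed image tarball into dir
	// (tar shells out to lz4 via -I) and removes the tarball afterwards.
	func extractPreload(tarball, dir string) error {
		if out, err := exec.Command("tar", "-I", "lz4", "-C", dir, "-xf", tarball).CombinedOutput(); err != nil {
			return fmt.Errorf("tar: %v: %s", err, out)
		}
		return os.Remove(tarball)
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			log.Fatal(err)
		}
	}
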
	I0615 09:33:37.379849    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:37.383479    1397 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0615 09:33:37.388752    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:37.449408    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:38.998025    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.548624125s)
	I0615 09:33:38.998130    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:39.004063    1397 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0615 09:33:39.004075    1397 cache_images.go:84] Images are preloaded, skipping loading
	I0615 09:33:39.004149    1397 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0615 09:33:39.011968    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:39.011977    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:39.012000    1397 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0615 09:33:39.012011    1397 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-477000 NodeName:addons-477000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0615 09:33:39.012111    1397 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-477000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0615 09:33:39.012156    1397 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-477000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0615 09:33:39.012203    1397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0615 09:33:39.015681    1397 binaries.go:44] Found k8s binaries, skipping transfer
	I0615 09:33:39.015718    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0615 09:33:39.018849    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0615 09:33:39.023920    1397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0615 09:33:39.028733    1397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0615 09:33:39.033571    1397 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0615 09:33:39.034818    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 09:33:39.038909    1397 certs.go:56] Setting up /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000 for IP: 192.168.105.2
	I0615 09:33:39.038918    1397 certs.go:190] acquiring lock for shared ca certs: {Name:mk9ee4d7ca68f2cc32c8609d33f6ce33c43a91d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.039073    1397 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key
	I0615 09:33:39.109209    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt ...
	I0615 09:33:39.109214    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt: {Name:mka7538e8370ad0560f47e28d206b077e2dbbef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109425    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key ...
	I0615 09:33:39.109428    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key: {Name:mkca6c7de675216938ac1a6663738af412e2d280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109532    1397 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key
	I0615 09:33:39.219574    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt ...
	I0615 09:33:39.219577    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt: {Name:mk21a595039c96735254391e5270364a73e52306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219709    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key ...
	I0615 09:33:39.219712    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key: {Name:mk96cab9f1987887c2b313cd365bdba518ec818b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219826    1397 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key
	I0615 09:33:39.219831    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt with IP's: []
	I0615 09:33:39.435828    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt ...
	I0615 09:33:39.435835    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: {Name:mk0f2105a4c5fdba007e9c77c7945365dc3f96af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436029    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key ...
	I0615 09:33:39.436031    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key: {Name:mk65e491f4b4c1ee8d05045efb9265b2c697a551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436124    1397 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969
	I0615 09:33:39.436133    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0615 09:33:39.510125    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 ...
	I0615 09:33:39.510129    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969: {Name:mk7c90d062166950585957cb3f0ce136594c9cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510277    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 ...
	I0615 09:33:39.510280    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969: {Name:mk4266445b8f2d5bc078d169ee24b8765955e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510384    1397 certs.go:337] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt
	I0615 09:33:39.510598    1397 certs.go:341] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key
	I0615 09:33:39.510713    1397 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key
	I0615 09:33:39.510723    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt with IP's: []
	I0615 09:33:39.610633    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt ...
	I0615 09:33:39.610637    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt: {Name:mk04e06c13fe3eccffb62f328096a02f5668baa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.610779    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key ...
	I0615 09:33:39.610783    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key: {Name:mk21e3c8e84fcac9a2d9da5e0fa06b26ad1ee7cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
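
All of the certificates above follow one recipe: generate a key, build an x509 template (for the apiserver cert the IP SANs are [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1], covering the node IP, the first address of the 10.96.0.0/12 service CIDR, and loopback), and sign it with the minikubeCA key. A minimal sketch of that signing step with Go's crypto/x509 (the CA wiring is omitted):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// signServerCert issues a server certificate for the given IP SANs,
	// signed by caCert/caKey; it returns the PEM cert and its private key.
	func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1), // a real issuer uses random serials
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
	}

	func main() {
		_ = signServerCert // constructing a CA to call this with is omitted in this sketch
	}
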
	I0615 09:33:39.611042    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem (1679 bytes)
	I0615 09:33:39.611072    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem (1078 bytes)
	I0615 09:33:39.611094    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem (1123 bytes)
	I0615 09:33:39.611429    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem (1679 bytes)
	I0615 09:33:39.611960    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0615 09:33:39.619546    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0615 09:33:39.626711    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0615 09:33:39.633501    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0615 09:33:39.640010    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0615 09:33:39.647112    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0615 09:33:39.654063    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0615 09:33:39.660582    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0615 09:33:39.667533    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0615 09:33:39.674545    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0615 09:33:39.680339    1397 ssh_runner.go:195] Run: openssl version
	I0615 09:33:39.682379    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0615 09:33:39.685404    1397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686832    1397 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 15 16:33 /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686855    1397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.688703    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0615 09:33:39.691957    1397 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0615 09:33:39.693381    1397 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0615 09:33:39.693418    1397 kubeadm.go:404] StartCluster: {Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:39.693485    1397 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0615 09:33:39.699291    1397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0615 09:33:39.702928    1397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0615 09:33:39.706168    1397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0615 09:33:39.708902    1397 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0615 09:33:39.708925    1397 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0615 09:33:39.731022    1397 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0615 09:33:39.731051    1397 kubeadm.go:322] [preflight] Running pre-flight checks
	I0615 09:33:39.787198    1397 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0615 09:33:39.787252    1397 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0615 09:33:39.787291    1397 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0615 09:33:39.845524    1397 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0615 09:33:39.853720    1397 out.go:204]   - Generating certificates and keys ...
	I0615 09:33:39.853771    1397 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0615 09:33:39.853800    1397 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0615 09:33:40.047052    1397 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0615 09:33:40.281668    1397 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0615 09:33:40.373604    1397 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0615 09:33:40.496002    1397 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0615 09:33:40.752895    1397 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0615 09:33:40.752975    1397 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.889354    1397 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0615 09:33:40.889424    1397 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.967392    1397 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0615 09:33:41.132487    1397 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0615 09:33:41.175551    1397 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0615 09:33:41.175583    1397 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0615 09:33:41.275708    1397 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0615 09:33:41.313261    1397 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0615 09:33:41.394612    1397 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0615 09:33:41.488793    1397 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0615 09:33:41.495623    1397 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0615 09:33:41.495672    1397 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0615 09:33:41.495691    1397 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0615 09:33:41.565044    1397 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0615 09:33:41.570236    1397 out.go:204]   - Booting up control plane ...
	I0615 09:33:41.570302    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0615 09:33:41.570344    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0615 09:33:41.570389    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0615 09:33:41.570430    1397 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0615 09:33:41.570514    1397 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0615 09:33:45.571765    1397 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003295 seconds
	I0615 09:33:45.571857    1397 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0615 09:33:45.577408    1397 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0615 09:33:46.094757    1397 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0615 09:33:46.095006    1397 kubeadm.go:322] [mark-control-plane] Marking the node addons-477000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0615 09:33:46.613385    1397 kubeadm.go:322] [bootstrap-token] Using token: f4kg8y.q60xaa2tn5uwspbb
	I0615 09:33:46.619341    1397 out.go:204]   - Configuring RBAC rules ...
	I0615 09:33:46.619403    1397 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0615 09:33:46.620813    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0615 09:33:46.624663    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0615 09:33:46.625913    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0615 09:33:46.627185    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0615 09:33:46.628306    1397 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0615 09:33:46.632523    1397 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0615 09:33:46.806353    1397 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0615 09:33:47.022461    1397 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0615 09:33:47.022733    1397 kubeadm.go:322] 
	I0615 09:33:47.022774    1397 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0615 09:33:47.022780    1397 kubeadm.go:322] 
	I0615 09:33:47.022834    1397 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0615 09:33:47.022839    1397 kubeadm.go:322] 
	I0615 09:33:47.022851    1397 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0615 09:33:47.022879    1397 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0615 09:33:47.022912    1397 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0615 09:33:47.022916    1397 kubeadm.go:322] 
	I0615 09:33:47.022952    1397 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0615 09:33:47.022958    1397 kubeadm.go:322] 
	I0615 09:33:47.022992    1397 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0615 09:33:47.022995    1397 kubeadm.go:322] 
	I0615 09:33:47.023016    1397 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0615 09:33:47.023050    1397 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0615 09:33:47.023081    1397 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0615 09:33:47.023083    1397 kubeadm.go:322] 
	I0615 09:33:47.023121    1397 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0615 09:33:47.023158    1397 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0615 09:33:47.023161    1397 kubeadm.go:322] 
	I0615 09:33:47.023197    1397 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023261    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 \
	I0615 09:33:47.023273    1397 kubeadm.go:322] 	--control-plane 
	I0615 09:33:47.023278    1397 kubeadm.go:322] 
	I0615 09:33:47.023320    1397 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0615 09:33:47.023326    1397 kubeadm.go:322] 
	I0615 09:33:47.023380    1397 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023443    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 
	I0615 09:33:47.023525    1397 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0615 09:33:47.023586    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:47.023594    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:47.031274    1397 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0615 09:33:47.035321    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0615 09:33:47.038799    1397 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0615 09:33:47.043709    1397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0615 09:33:47.043747    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.043800    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627 minikube.k8s.io/name=addons-477000 minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.095849    1397 ops.go:34] apiserver oom_adj: -16
	I0615 09:33:47.095898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.645147    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.145079    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.645093    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.145044    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.645148    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.145134    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.645328    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.145310    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.645116    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.144609    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.645278    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.145243    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.645239    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.145000    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.644744    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.145233    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.644949    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.145008    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.644938    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.143430    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.645224    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.144898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.644909    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.144773    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.644338    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.144834    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.206574    1397 kubeadm.go:1081] duration metric: took 13.163176875s to wait for elevateKubeSystemPrivileges.
	I0615 09:34:00.206587    1397 kubeadm.go:406] StartCluster complete in 20.513668625s
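
The run of identical `kubectl get sa default` calls above is a fixed-interval poll: minikube re-issues the same probe every 500ms until the "default" ServiceAccount exists, and that wait is what the 13.16s elevateKubeSystemPrivileges metric measures. A minimal shell equivalent of the loop:

# Block until the "default" ServiceAccount appears, polling every 500ms
# exactly as the log lines above do.
until sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done
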
	I0615 09:34:00.206614    1397 settings.go:142] acquiring lock: {Name:mk45a698fcd8dd8ae6984c9cf4ad4d183fdb5424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.206769    1397 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:34:00.206961    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/kubeconfig: {Name:mkbe9cac04fb467055323f2e3d5db2c6ddc287ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.207185    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0615 09:34:00.207249    1397 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0615 09:34:00.207317    1397 addons.go:66] Setting ingress=true in profile "addons-477000"
	I0615 09:34:00.207322    1397 addons.go:66] Setting ingress-dns=true in profile "addons-477000"
	I0615 09:34:00.207325    1397 addons.go:228] Setting addon ingress=true in "addons-477000"
	I0615 09:34:00.207327    1397 addons.go:228] Setting addon ingress-dns=true in "addons-477000"
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207358    1397 addons.go:66] Setting cloud-spanner=true in profile "addons-477000"
	I0615 09:34:00.207362    1397 addons.go:228] Setting addon cloud-spanner=true in "addons-477000"
	I0615 09:34:00.207371    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207402    1397 addons.go:66] Setting metrics-server=true in profile "addons-477000"
	I0615 09:34:00.207421    1397 addons.go:66] Setting registry=true in profile "addons-477000"
	I0615 09:34:00.207457    1397 addons.go:228] Setting addon registry=true in "addons-477000"
	I0615 09:34:00.207434    1397 addons.go:66] Setting inspektor-gadget=true in profile "addons-477000"
	I0615 09:34:00.207483    1397 addons.go:228] Setting addon inspektor-gadget=true in "addons-477000"
	I0615 09:34:00.207494    1397 addons.go:228] Setting addon metrics-server=true in "addons-477000"
	I0615 09:34:00.207502    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207433    1397 addons.go:66] Setting default-storageclass=true in profile "addons-477000"
	I0615 09:34:00.207531    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:34:00.207537    1397 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-477000"
	I0615 09:34:00.207575    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207475    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207436    1397 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-477000"
	I0615 09:34:00.207676    1397 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.207687    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207317    1397 addons.go:66] Setting volumesnapshots=true in profile "addons-477000"
	I0615 09:34:00.207735    1397 addons.go:228] Setting addon volumesnapshots=true in "addons-477000"
	I0615 09:34:00.207746    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207438    1397 addons.go:66] Setting gcp-auth=true in profile "addons-477000"
	I0615 09:34:00.207776    1397 mustload.go:65] Loading cluster: addons-477000
	I0615 09:34:00.207433    1397 addons.go:66] Setting storage-provisioner=true in profile "addons-477000"
	I0615 09:34:00.208143    1397 addons.go:228] Setting addon storage-provisioner=true in "addons-477000"
	I0615 09:34:00.208157    1397 host.go:66] Checking if "addons-477000" exists ...
	W0615 09:34:00.208299    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208325    1397 addons.go:274] "addons-477000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0615 09:34:00.208331    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208338    1397 addons.go:274] "addons-477000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0615 09:34:00.208333    1397 addons.go:464] Verifying addon registry=true in "addons-477000"
	I0615 09:34:00.211629    1397 out.go:177] * Verifying registry addon...
	W0615 09:34:00.208373    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.207993    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	W0615 09:34:00.208397    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208133    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208437    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208601    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208662    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.215730    1397 addons.go:228] Setting addon default-storageclass=true in "addons-477000"
	W0615 09:34:00.218550    1397 addons.go:274] "addons-477000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218559    1397 addons.go:274] "addons-477000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218580    1397 addons.go:274] "addons-477000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218593    1397 addons.go:274] "addons-477000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0615 09:34:00.218944    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0615 09:34:00.219273    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.221576    1397 addons.go:464] Verifying addon metrics-server=true in "addons-477000"
	I0615 09:34:00.221583    1397 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0615 09:34:00.224623    1397 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0615 09:34:00.224631    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0615 09:34:00.224638    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.221629    1397 addons.go:464] Verifying addon ingress=true in "addons-477000"
	I0615 09:34:00.221636    1397 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.221723    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.228543    1397 out.go:177] * Verifying ingress addon...
	I0615 09:34:00.238557    1397 out.go:177] * Verifying csi-hostpath-driver addon...
	I0615 09:34:00.229248    1397 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.235988    1397 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0615 09:34:00.241352    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0615 09:34:00.242560    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0615 09:34:00.242594    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.242973    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0615 09:34:00.245385    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0615 09:34:00.251897    1397 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
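
The three `kapi.go:75` waiters above poll the API for pods matching a label selector, and at this point each selector matches zero pods. Equivalent one-off checks with kubectl, with namespaces and selectors copied from the log lines:

kubectl -n kube-system   get pods -l kubernetes.io/minikube-addons=registry
kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
kubectl -n kube-system   get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
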
	I0615 09:34:00.275682    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
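
The pipeline above edits CoreDNS's Corefile in place: the sed expressions splice a `hosts` block (mapping host.minikube.internal to the gateway IP) ahead of the resolv.conf forwarder and a `log` directive ahead of `errors`, then `kubectl replace` writes the ConfigMap back. The expected fragment below is reconstructed from the sed program, not quoted from the cluster:

# Inspect the edited Corefile:
sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
# Expected fragment (reconstructed from the sed expressions above):
#     hosts {
#        192.168.105.1 host.minikube.internal
#        fallthrough
#     }
#     forward . /etc/resolv.conf
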
	I0615 09:34:00.278596    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0615 09:34:00.278604    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0615 09:34:00.310787    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0615 09:34:00.310799    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0615 09:34:00.340750    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0615 09:34:00.340763    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0615 09:34:00.370147    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.374756    1397 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0615 09:34:00.374766    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0615 09:34:00.391483    1397 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.391493    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0615 09:34:00.396735    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.725129    1397 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-477000" context rescaled to 1 replicas
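
The rescale above drops the coredns deployment from its default two replicas to one, which is all a single-node cluster needs. Done by hand it would be:

kubectl -n kube-system scale deployment coredns --replicas=1
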
	I0615 09:34:00.725155    1397 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:34:00.731827    1397 out.go:177] * Verifying Kubernetes components...
	I0615 09:34:00.735986    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 09:34:01.130555    1397 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0615 09:34:01.273320    1397 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0615 09:34:01.273347    1397 retry.go:31] will retry after 358.412085ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
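
The failure above is an ordering race, not a bad manifest: the VolumeSnapshotClass object is applied in the same batch as the CRDs that define its kind, and the apiserver has not yet established those CRDs when the class is submitted, hence "ensure CRDs are installed first". minikube's answer is the retry (and, below, a second pass with `kubectl apply --force`). A sketch of an alternative that sequences the steps explicitly; this is not what minikube itself does:

# Apply the CRDs first, wait for them to be established, then apply the
# objects that depend on them.
kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl wait --for=condition=Established --timeout=60s \
  crd/volumesnapshotclasses.snapshot.storage.k8s.io
kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
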
	I0615 09:34:01.273795    1397 node_ready.go:35] waiting up to 6m0s for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275272    1397 node_ready.go:49] node "addons-477000" has status "Ready":"True"
	I0615 09:34:01.275281    1397 node_ready.go:38] duration metric: took 1.477792ms waiting for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275284    1397 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 09:34:01.279498    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:01.633151    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:02.299497    1397 pod_ready.go:92] pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.299518    1397 pod_ready.go:81] duration metric: took 1.020034208s waiting for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.299526    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303736    1397 pod_ready.go:92] pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.303743    1397 pod_ready.go:81] duration metric: took 4.212458ms waiting for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303749    1397 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307230    1397 pod_ready.go:92] pod "etcd-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.307237    1397 pod_ready.go:81] duration metric: took 3.484042ms waiting for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307243    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311004    1397 pod_ready.go:92] pod "kube-apiserver-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.311013    1397 pod_ready.go:81] duration metric: took 3.766916ms waiting for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311019    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481809    1397 pod_ready.go:92] pod "kube-controller-manager-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.481828    1397 pod_ready.go:81] duration metric: took 170.807958ms waiting for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481838    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883885    1397 pod_ready.go:92] pod "kube-proxy-8rgcs" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.883919    1397 pod_ready.go:81] duration metric: took 402.082375ms waiting for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883933    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277736    1397 pod_ready.go:92] pod "kube-scheduler-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:03.277748    1397 pod_ready.go:81] duration metric: took 393.817875ms waiting for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277754    1397 pod_ready.go:38] duration metric: took 2.002511417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
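
The pod_ready block above gates on each system-critical pod in turn, using the label selectors listed at its start. A rough kubectl equivalent of the same gate (minikube polls the API directly rather than shelling out like this):

# Selectors copied from the pod_ready log line above.
for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
           component=kube-controller-manager k8s-app=kube-proxy \
           component=kube-scheduler; do
  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=6m
done
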
	I0615 09:34:03.277768    1397 api_server.go:52] waiting for apiserver process to appear ...
	I0615 09:34:03.277845    1397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 09:34:03.986804    1397 api_server.go:72] duration metric: took 3.261712416s to wait for apiserver process to appear ...
	I0615 09:34:03.986816    1397 api_server.go:88] waiting for apiserver healthz status ...
	I0615 09:34:03.986824    1397 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0615 09:34:03.986882    1397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.353767917s)
	I0615 09:34:03.990093    1397 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0615 09:34:03.990734    1397 api_server.go:141] control plane version: v1.27.3
	I0615 09:34:03.990742    1397 api_server.go:131] duration metric: took 3.923291ms to wait for apiserver health ...
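
The healthz probe above is a plain HTTPS GET against the apiserver. Done by hand from the host it is simply the following, where `-k` skips verification of minikube's self-signed serving certificate:

curl -k https://192.168.105.2:8443/healthz
# -> ok
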
	I0615 09:34:03.990745    1397 system_pods.go:43] waiting for kube-system pods to appear ...
	I0615 09:34:03.993833    1397 system_pods.go:59] 9 kube-system pods found
	I0615 09:34:03.993840    1397 system_pods.go:61] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.993843    1397 system_pods.go:61] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.993845    1397 system_pods.go:61] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.993848    1397 system_pods.go:61] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.993851    1397 system_pods.go:61] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.993853    1397 system_pods.go:61] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.993855    1397 system_pods.go:61] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.993859    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993864    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993866    1397 system_pods.go:74] duration metric: took 3.119166ms to wait for pod list to return data ...
	I0615 09:34:03.993869    1397 default_sa.go:34] waiting for default service account to be created ...
	I0615 09:34:03.995049    1397 default_sa.go:45] found service account: "default"
	I0615 09:34:03.995055    1397 default_sa.go:55] duration metric: took 1.183708ms for default service account to be created ...
	I0615 09:34:03.995057    1397 system_pods.go:116] waiting for k8s-apps to be running ...
	I0615 09:34:03.998400    1397 system_pods.go:86] 9 kube-system pods found
	I0615 09:34:03.998409    1397 system_pods.go:89] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.998411    1397 system_pods.go:89] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.998414    1397 system_pods.go:89] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.998416    1397 system_pods.go:89] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.998419    1397 system_pods.go:89] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.998421    1397 system_pods.go:89] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.998424    1397 system_pods.go:89] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.998429    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998433    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998436    1397 system_pods.go:126] duration metric: took 3.376208ms to wait for k8s-apps to be running ...
	I0615 09:34:03.998439    1397 system_svc.go:44] waiting for kubelet service to be running ....
	I0615 09:34:03.998489    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 09:34:04.003913    1397 system_svc.go:56] duration metric: took 5.471458ms WaitForService to wait for kubelet.
	I0615 09:34:04.003921    1397 kubeadm.go:581] duration metric: took 3.278833625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0615 09:34:04.003932    1397 node_conditions.go:102] verifying NodePressure condition ...
	I0615 09:34:04.077208    1397 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0615 09:34:04.077239    1397 node_conditions.go:123] node cpu capacity is 2
	I0615 09:34:04.077244    1397 node_conditions.go:105] duration metric: took 73.311333ms to run NodePressure ...
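
The NodePressure check above reads the storage and CPU figures straight off the node object; the same data is visible with:

kubectl get node addons-477000 -o jsonpath='{.status.capacity}'
# -> {"cpu":"2","ephemeral-storage":"17784760Ki",...}
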
	I0615 09:34:04.077249    1397 start.go:228] waiting for startup goroutines ...
	I0615 09:34:06.831960    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0615 09:34:06.832053    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.882622    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0615 09:34:06.891297    1397 addons.go:228] Setting addon gcp-auth=true in "addons-477000"
	I0615 09:34:06.891339    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:06.892599    1397 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0615 09:34:06.892612    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.928262    1397 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0615 09:34:06.932997    1397 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0615 09:34:06.937187    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0615 09:34:06.937194    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0615 09:34:06.943495    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0615 09:34:06.943502    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0615 09:34:06.949337    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:06.949343    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0615 09:34:06.954968    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:07.475178    1397 addons.go:464] Verifying addon gcp-auth=true in "addons-477000"
	I0615 09:34:07.478304    1397 out.go:177] * Verifying gcp-auth addon...
	I0615 09:34:07.485666    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0615 09:34:07.491991    1397 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0615 09:34:07.492002    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:07.996710    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:08.496921    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.002133    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.494606    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.995080    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.495704    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.995530    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:11.495877    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.001470    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.497446    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.001473    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.502268    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.997362    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.503184    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.997798    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:15.495991    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.000278    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.501895    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.001719    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.495416    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.995757    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:18.496835    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:19.002478    1397 kapi.go:107] duration metric: took 11.51706925s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0615 09:34:19.008171    1397 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-477000 cluster.
	I0615 09:34:19.011889    1397 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0615 09:34:19.016120    1397 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
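
The three gcp-auth messages above describe a mutating admission webhook: credentials are injected into pods as they are created, which is why existing pods must be recreated and why the opt-out label has to be present at creation time. A minimal sketch of a pod that opts out; the pod name and image are illustrative, only the `gcp-auth-skip-secret` label comes from the message above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds            # illustrative name
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: busybox              # illustrative image
    command: ["sleep", "3600"]
EOF
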
	I0615 09:40:00.215049    1397 kapi.go:107] duration metric: took 6m0.00478975s to wait for kubernetes.io/minikube-addons=registry ...
	W0615 09:40:00.215455    1397 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0615 09:40:00.235888    1397 kapi.go:107] duration metric: took 6m0.001641708s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0615 09:40:00.235937    1397 kapi.go:107] duration metric: took 6m0.008679083s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0615 09:40:00.236028    1397 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0615 09:40:00.236088    1397 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0615 09:40:00.243957    1397 out.go:177] * Enabled addons: inspektor-gadget, metrics-server, cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0615 09:40:00.250970    1397 addons.go:499] enable addons completed in 6m0.052449292s: enabled=[inspektor-gadget metrics-server cloud-spanner ingress-dns storage-provisioner default-storageclass volumesnapshots gcp-auth]
	I0615 09:40:00.251042    1397 start.go:233] waiting for cluster config update ...
	I0615 09:40:00.251069    1397 start.go:242] writing updated cluster config ...
	I0615 09:40:00.255738    1397 ssh_runner.go:195] Run: rm -f paused
	I0615 09:40:00.403218    1397 start.go:582] kubectl: 1.25.9, cluster: 1.27.3 (minor skew: 2)
	I0615 09:40:00.405982    1397 out.go:177] 
	W0615 09:40:00.410033    1397 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.3.
	I0615 09:40:00.413868    1397 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0615 09:40:00.421952    1397 out.go:177] * Done! kubectl is now configured to use "addons-477000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 17:01:53 UTC. --
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.712778061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.712798672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:12 addons-477000 dockerd[1091]: time="2023-06-15T16:34:12.754255249Z" level=info msg="ignoring event" container=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754320625Z" level=info msg="shim disconnected" id=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c namespace=moby
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754344286Z" level=warning msg="cleaning up after shim disconnected" id=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c namespace=moby
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754349480Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.793607046Z" level=info msg="shim disconnected" id=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1091]: time="2023-06-15T16:34:13.793691952Z" level=info msg="ignoring event" container=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.794164777Z" level=warning msg="cleaning up after shim disconnected" id=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.794176995Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1091]: time="2023-06-15T16:34:14.817608715Z" level=info msg="ignoring event" container=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.817442834Z" level=info msg="shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819525140Z" level=warning msg="cleaning up after shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819574276Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.579995441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580279599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580317285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580357122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c316c00ee755585c1753e0f1d6364e1731871da5d072484c67c43cac67cd349/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 15 16:34:15 addons-477000 dockerd[1091]: time="2023-06-15T16:34:15.926803390Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:18Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269480431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269881621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269894061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269898788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	6a4bcd8ac64ff       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              27 minutes ago      Running             gcp-auth                     0                   0c316c00ee755
	8527d6f42bef1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   27 minutes ago      Running             volume-snapshot-controller   0                   f6bd41ad4abf6
	06a9dab9c48b6       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   27 minutes ago      Running             volume-snapshot-controller   0                   629404aaee996
	256eaaad3894a       97e04611ad434                                                                                                             27 minutes ago      Running             coredns                      0                   f6fc2a0d05c4a
	29b72a92c6578       fb73e92641fd5                                                                                                             27 minutes ago      Running             kube-proxy                   0                   405ca9198a355
	733213e41e3b9       bcb9e554eaab6                                                                                                             28 minutes ago      Running             kube-scheduler               0                   25817e506c78b
	b11fb0f325644       39dfb036b0986                                                                                                             28 minutes ago      Running             kube-apiserver               0                   0dde73a500899
	66de98cb24ea0       ab3683b584ae5                                                                                                             28 minutes ago      Running             kube-controller-manager      0                   69ef168f52131
	41a6909f99a59       24bc64e911039                                                                                                             28 minutes ago      Running             etcd                         0                   9b969e901cc05
	
	* 
	* ==> coredns [256eaaad3894] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55502 - 31535 "HINFO IN 8156761713541019547.3807690688336836625. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.006087175s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-477000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-477000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627
	                    minikube.k8s.io/name=addons-477000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Jun 2023 16:33:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-477000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Jun 2023 17:01:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Jun 2023 17:00:23 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Jun 2023 17:00:23 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Jun 2023 17:00:23 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Jun 2023 17:00:23 +0000   Thu, 15 Jun 2023 16:33:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-477000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 5009a87f17804889a5a4616073b937e0
	  System UUID:                5009a87f17804889a5a4616073b937e0
	  Boot ID:                    9630f686-3c90-436f-98e6-d8c6686f510a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-2pgxv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5d78c9869d-mds5s                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     27m
	  kube-system                 etcd-addons-477000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         28m
	  kube-system                 kube-apiserver-addons-477000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-addons-477000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-8rgcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-addons-477000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 snapshot-controller-75bbb956b9-p6hk4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 snapshot-controller-75bbb956b9-prqv8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node addons-477000 status is now: NodeReady
	  Normal  RegisteredNode           27m                node-controller  Node addons-477000 event: Registered Node addons-477000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.687452] EINJ: EINJ table not found.
	[  +0.627011] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043411] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000812] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.868427] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.067044] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.422105] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.174556] systemd-fstab-generator[729]: Ignoring "noauto" for root device
	[  +0.069729] systemd-fstab-generator[740]: Ignoring "noauto" for root device
	[  +0.066761] systemd-fstab-generator[753]: Ignoring "noauto" for root device
	[  +1.220689] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[  +0.067164] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.058616] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.062347] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +0.069889] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +2.576383] systemd-fstab-generator[1084]: Ignoring "noauto" for root device
	[  +1.530737] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.580821] systemd-fstab-generator[1404]: Ignoring "noauto" for root device
	[  +5.139726] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[Jun15 16:34] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.392909] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.125194] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.280200] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.114632] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [41a6909f99a5] <==
	* {"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-477000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-06-15T16:43:43.957Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":747}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":747,"took":"2.441695ms","hash":524925281}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":524925281,"revision":747,"compact-revision":-1}
	{"level":"info","ts":"2023-06-15T16:48:43.971Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":897}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":897,"took":"1.284283ms","hash":2514030906}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2514030906,"revision":897,"compact-revision":747}
	{"level":"info","ts":"2023-06-15T16:53:43.979Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1048}
	{"level":"info","ts":"2023-06-15T16:53:43.981Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1048,"took":"857.726µs","hash":834622362}
	{"level":"info","ts":"2023-06-15T16:53:43.981Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":834622362,"revision":1048,"compact-revision":897}
	{"level":"info","ts":"2023-06-15T16:58:43.988Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1198}
	{"level":"info","ts":"2023-06-15T16:58:43.991Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1198,"took":"1.183357ms","hash":1870194874}
	{"level":"info","ts":"2023-06-15T16:58:43.991Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1870194874,"revision":1198,"compact-revision":1048}
	
	* 
	* ==> gcp-auth [6a4bcd8ac64f] <==
	* 2023/06/15 16:34:18 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  17:01:54 up 28 min,  0 users,  load average: 0.65, 0.55, 0.45
	Linux addons-477000 5.10.57 #1 SMP PREEMPT Wed Jun 14 05:08:37 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b11fb0f32564] <==
	* I0615 16:34:07.712516       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0615 16:38:44.748912       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:38:44.749316       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:38:44.757965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:38:44.758323       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.761094       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.761179       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.769254       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.769477       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.769594       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.769676       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:48:44.750393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:48:44.750930       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:48:44.765734       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:48:44.766097       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:53:44.751144       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:53:44.751426       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:53:44.766204       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:53:44.766395       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.752713       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.753513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.754419       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.754518       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.763858       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.764268       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [66de98cb24ea] <==
	* I0615 16:34:13.731338       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:13.817171       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.755476       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:14.766120       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.820829       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.823621       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.825690       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0615 16:34:14.825754       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.826668       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.850370       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.758931       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.761497       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.764164       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0615 16:34:15.764226       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.766220       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.768259       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:29.767346       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0615 16:34:29.767460       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0615 16:34:29.868459       1 shared_informer.go:318] Caches are synced for resource quota
	I0615 16:34:30.190420       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0615 16:34:30.296099       1 shared_informer.go:318] Caches are synced for garbage collector
	I0615 16:34:44.034182       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:44.057184       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:45.016712       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:45.039501       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [29b72a92c657] <==
	* I0615 16:34:01.157223       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0615 16:34:01.157274       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0615 16:34:01.157290       1 server_others.go:554] "Using iptables proxy"
	I0615 16:34:01.207136       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0615 16:34:01.207158       1 server_others.go:192] "Using iptables Proxier"
	I0615 16:34:01.207188       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0615 16:34:01.207493       1 server.go:658] "Version info" version="v1.27.3"
	I0615 16:34:01.207499       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0615 16:34:01.208029       1 config.go:188] "Starting service config controller"
	I0615 16:34:01.208049       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0615 16:34:01.208060       1 config.go:97] "Starting endpoint slice config controller"
	I0615 16:34:01.208062       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0615 16:34:01.209533       1 config.go:315] "Starting node config controller"
	I0615 16:34:01.209537       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0615 16:34:01.308743       1 shared_informer.go:318] Caches are synced for service config
	I0615 16:34:01.308782       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0615 16:34:01.309993       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [733213e41e3b] <==
	* W0615 16:33:44.753904       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:44.754011       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:44.754034       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:44.754072       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:44.754021       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:44.754081       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:44.754001       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0615 16:33:44.754100       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0615 16:33:44.754136       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0615 16:33:44.754145       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0615 16:33:45.605616       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0615 16:33:45.605673       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0615 16:33:45.647245       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:45.647292       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:45.699650       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0615 16:33:45.699699       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0615 16:33:45.702358       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0615 16:33:45.702403       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0615 16:33:45.718371       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:45.718408       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:45.723261       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:45.723281       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:45.755043       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0615 16:33:45.755066       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0615 16:33:46.350596       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 17:01:54 UTC. --
	Jun 15 16:56:47 addons-477000 kubelet[2256]: E0615 16:56:47.333470    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:56:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:56:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:56:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:57:47 addons-477000 kubelet[2256]: E0615 16:57:47.331859    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:57:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:57:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:57:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:58:47 addons-477000 kubelet[2256]: W0615 16:58:47.318879    2256 machine.go:65] Cannot read vendor id correctly, set empty.
	Jun 15 16:58:47 addons-477000 kubelet[2256]: E0615 16:58:47.332386    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:58:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:58:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:58:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:59:47 addons-477000 kubelet[2256]: E0615 16:59:47.331594    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:59:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:59:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:59:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:00:47 addons-477000 kubelet[2256]: E0615 17:00:47.331224    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:00:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:00:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:00:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:01:47 addons-477000 kubelet[2256]: E0615 17:01:47.337776    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:01:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:01:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:01:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-477000 -n addons-477000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-477000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (0.74s)
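
The kubelet entries in the log above fail once per minute to create the KUBE-KUBELET-CANARY chain because the guest kernel exposes no ip6tables `nat' table. A minimal way to confirm this from the host, as a sketch (the profile name and binary path are taken from this report; the module name ip6table_nat and the presence of /proc/config.gz in the guest image are assumptions):

    # Reproduce the error the kubelet hits once per minute
    out/minikube-darwin-arm64 -p addons-477000 ssh -- sudo ip6tables -t nat -L
    # Fails if IPv6 NAT was built neither into the guest kernel nor as a module
    out/minikube-darwin-arm64 -p addons-477000 ssh -- sudo modprobe ip6table_nat
    # Inspect the kernel config directly (assumes /proc/config.gz is enabled)
    out/minikube-darwin-arm64 -p addons-477000 ssh -- "zcat /proc/config.gz | grep IP6_NF_NAT"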
TestAddons/parallel/InspektorGadget (480.84s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:329: TestAddons/parallel/InspektorGadget: WARNING: pod list for "gadget" "k8s-app=gadget" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:814: ***** TestAddons/parallel/InspektorGadget: pod "k8s-app=gadget" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:814: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-477000 -n addons-477000
addons_test.go:814: TestAddons/parallel/InspektorGadget: showing logs for failed pods as of 2023-06-15 10:01:52.967043 -0700 PDT m=+1767.251638834
addons_test.go:815: failed waiting for inspektor-gadget pod: k8s-app=gadget within 8m0s: context deadline exceeded
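
The 8m0s wait above is the harness polling for pods labeled k8s-app=gadget in the gadget namespace to become Ready; a rough standalone equivalent, as a sketch (context name taken from this report, not the harness's actual client-go polling loop):

    kubectl --context addons-477000 -n gadget wait --for=condition=Ready pod -l k8s-app=gadget --timeout=8m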
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-477000 -n addons-477000
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-477000 logs -n 25
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | --download-only -p             | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT |                     |
	|         | binary-mirror-062000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-062000        | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | -p addons-477000               | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:40 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:52 PDT |                     |
	|         | addons-477000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 09:33:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 09:33:17.135905    1397 out.go:296] Setting OutFile to fd 1 ...
	I0615 09:33:17.136030    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136034    1397 out.go:309] Setting ErrFile to fd 2...
	I0615 09:33:17.136037    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136120    1397 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 09:33:17.137161    1397 out.go:303] Setting JSON to false
	I0615 09:33:17.152121    1397 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":168,"bootTime":1686846629,"procs":371,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 09:33:17.152202    1397 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 09:33:17.156891    1397 out.go:177] * [addons-477000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 09:33:17.159887    1397 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 09:33:17.163775    1397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:33:17.159984    1397 notify.go:220] Checking for updates...
	I0615 09:33:17.171800    1397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 09:33:17.174813    1397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 09:33:17.177828    1397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 09:33:17.180704    1397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 09:33:17.183887    1397 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 09:33:17.187819    1397 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 09:33:17.194783    1397 start.go:297] selected driver: qemu2
	I0615 09:33:17.194788    1397 start.go:884] validating driver "qemu2" against <nil>
	I0615 09:33:17.194794    1397 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 09:33:17.196752    1397 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 09:33:17.199793    1397 out.go:177] * Automatically selected the socket_vmnet network
	I0615 09:33:17.201307    1397 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 09:33:17.201331    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:17.201335    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:17.201341    1397 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 09:33:17.201349    1397 start_flags.go:319] config:
	{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:17.201434    1397 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 09:33:17.209829    1397 out.go:177] * Starting control plane node addons-477000 in cluster addons-477000
	I0615 09:33:17.213726    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:17.213749    1397 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 09:33:17.213768    1397 cache.go:57] Caching tarball of preloaded images
	I0615 09:33:17.213824    1397 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 09:33:17.213830    1397 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 09:33:17.214051    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:17.214063    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json: {Name:mkc1c34b82952aae697463d2d78c6ea098445790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:17.214292    1397 start.go:365] acquiring machines lock for addons-477000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 09:33:17.214399    1397 start.go:369] acquired machines lock for "addons-477000" in 101.583µs
	I0615 09:33:17.214409    1397 start.go:93] Provisioning new machine with config: &{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:33:17.214436    1397 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 09:33:17.221743    1397 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0615 09:33:17.568031    1397 start.go:159] libmachine.API.Create for "addons-477000" (driver="qemu2")
	I0615 09:33:17.568071    1397 client.go:168] LocalClient.Create starting
	I0615 09:33:17.568226    1397 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 09:33:17.626803    1397 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 09:33:17.737973    1397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 09:33:17.902563    1397 main.go:141] libmachine: Creating SSH key...
	I0615 09:33:17.968617    1397 main.go:141] libmachine: Creating Disk image...
	I0615 09:33:17.968623    1397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 09:33:17.969817    1397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.004891    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.004923    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.004982    1397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2 +20000M
	I0615 09:33:18.012411    1397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 09:33:18.012436    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.012455    1397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.012466    1397 main.go:141] libmachine: Starting QEMU VM...
	I0615 09:33:18.012501    1397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:25:cc:0f:2e:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.081537    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.081558    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.081562    1397 main.go:141] libmachine: Attempt 0
	I0615 09:33:18.081577    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:20.083712    1397 main.go:141] libmachine: Attempt 1
	I0615 09:33:20.083961    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:22.086122    1397 main.go:141] libmachine: Attempt 2
	I0615 09:33:22.086166    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:24.088201    1397 main.go:141] libmachine: Attempt 3
	I0615 09:33:24.088224    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:26.090268    1397 main.go:141] libmachine: Attempt 4
	I0615 09:33:26.090325    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:28.092361    1397 main.go:141] libmachine: Attempt 5
	I0615 09:33:28.092379    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094488    1397 main.go:141] libmachine: Attempt 6
	I0615 09:33:30.094575    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094985    1397 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0615 09:33:30.095099    1397 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 09:33:30.095124    1397 main.go:141] libmachine: Found match: 1a:25:cc:f:2e:6f
	I0615 09:33:30.095168    1397 main.go:141] libmachine: IP: 192.168.105.2
	I0615 09:33:30.095195    1397 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0615 09:33:32.115264    1397 machine.go:88] provisioning docker machine ...
	I0615 09:33:32.115338    1397 buildroot.go:166] provisioning hostname "addons-477000"
	I0615 09:33:32.116828    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.117588    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.117607    1397 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-477000 && echo "addons-477000" | sudo tee /etc/hostname
	I0615 09:33:32.199158    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-477000
	
	I0615 09:33:32.199283    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.199748    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.199763    1397 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-477000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-477000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-477000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0615 09:33:32.260846    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0615 09:33:32.260864    1397 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16718-868/.minikube CaCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16718-868/.minikube}
	I0615 09:33:32.260878    1397 buildroot.go:174] setting up certificates
	I0615 09:33:32.260906    1397 provision.go:83] configureAuth start
	I0615 09:33:32.260912    1397 provision.go:138] copyHostCerts
	I0615 09:33:32.261103    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem (1078 bytes)
	I0615 09:33:32.261436    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem (1123 bytes)
	I0615 09:33:32.262101    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem (1679 bytes)
	I0615 09:33:32.262442    1397 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem org=jenkins.addons-477000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-477000]
	I0615 09:33:32.306279    1397 provision.go:172] copyRemoteCerts
	I0615 09:33:32.306343    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0615 09:33:32.306360    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.335305    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0615 09:33:32.343471    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0615 09:33:32.351180    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0615 09:33:32.358490    1397 provision.go:86] duration metric: configureAuth took 97.576167ms
	I0615 09:33:32.358498    1397 buildroot.go:189] setting minikube options for container-runtime
	I0615 09:33:32.358950    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:33:32.358995    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.359216    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.359220    1397 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0615 09:33:32.410196    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0615 09:33:32.410204    1397 buildroot.go:70] root file system type: tmpfs
	I0615 09:33:32.410261    1397 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0615 09:33:32.410301    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.410550    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.410587    1397 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0615 09:33:32.468329    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0615 09:33:32.468380    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.468634    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.468643    1397 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0615 09:33:32.794674    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0615 09:33:32.794698    1397 machine.go:91] provisioned docker machine in 679.423792ms
	I0615 09:33:32.794704    1397 client.go:171] LocalClient.Create took 15.226996125s
	I0615 09:33:32.794723    1397 start.go:167] duration metric: libmachine.API.Create for "addons-477000" took 15.227064791s
	I0615 09:33:32.794726    1397 start.go:300] post-start starting for "addons-477000" (driver="qemu2")
	I0615 09:33:32.794731    1397 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0615 09:33:32.794812    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0615 09:33:32.794822    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.823879    1397 ssh_runner.go:195] Run: cat /etc/os-release
	I0615 09:33:32.825122    1397 info.go:137] Remote host: Buildroot 2021.02.12
	I0615 09:33:32.825128    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/addons for local assets ...
	I0615 09:33:32.825196    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/files for local assets ...
	I0615 09:33:32.825222    1397 start.go:303] post-start completed in 30.494125ms
	I0615 09:33:32.825555    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:32.825707    1397 start.go:128] duration metric: createHost completed in 15.611646375s
	I0615 09:33:32.825734    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.825947    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.825951    1397 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0615 09:33:32.876753    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686846813.320713543
	
	I0615 09:33:32.876760    1397 fix.go:206] guest clock: 1686846813.320713543
	I0615 09:33:32.876764    1397 fix.go:219] Guest: 2023-06-15 09:33:33.320713543 -0700 PDT Remote: 2023-06-15 09:33:32.825711 -0700 PDT m=+15.708594751 (delta=495.002543ms)
	I0615 09:33:32.876775    1397 fix.go:190] guest clock delta is within tolerance: 495.002543ms
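	The delta above is guest wall clock minus host wall clock across one SSH round trip (the guest side is the date +%s.%N just run); minikube leaves the clock alone while the difference stays inside its tolerance. A rough shell equivalent of the measurement (IP and key path taken from this log; the actual tolerance is minikube's internal constant, not shown here):

	  guest=$(ssh -i /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa docker@192.168.105.2 'date +%s.%N')
	  host=$(date +%s.%N)
	  echo "clock delta: $(echo "$guest - $host" | bc)s"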
	I0615 09:33:32.876778    1397 start.go:83] releasing machines lock for "addons-477000", held for 15.662753208s
	I0615 09:33:32.877060    1397 ssh_runner.go:195] Run: cat /version.json
	I0615 09:33:32.877067    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.877085    1397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0615 09:33:32.877121    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.951587    1397 ssh_runner.go:195] Run: systemctl --version
	I0615 09:33:32.953983    1397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0615 09:33:32.956008    1397 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0615 09:33:32.956040    1397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0615 09:33:32.961754    1397 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0615 09:33:32.961761    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:32.961877    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:32.967359    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0615 09:33:32.970783    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0615 09:33:32.973872    1397 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0615 09:33:32.973908    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0615 09:33:32.976794    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.979871    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0615 09:33:32.983273    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.986847    1397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0615 09:33:32.990009    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0615 09:33:32.992910    1397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0615 09:33:32.995885    1397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0615 09:33:32.999181    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.082046    1397 ssh_runner.go:195] Run: sudo systemctl restart containerd
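	The sed runs above bring /etc/containerd/config.toml in line with the kubelet's cgroupfs driver and the pause image pinned for this Kubernetes version. A quick way to confirm the edits took (key names per containerd's default CRI config layout, which is an assumption here):

	  grep -E 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml
	  # expected after the edits above:
	  #   sandbox_image = "registry.k8s.io/pause:3.9"
	  #   SystemdCgroup = false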
	I0615 09:33:33.090305    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:33.090367    1397 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0615 09:33:33.095444    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.099628    1397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0615 09:33:33.106008    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.110583    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.115305    1397 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0615 09:33:33.157221    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.165685    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:33.171700    1397 ssh_runner.go:195] Run: which cri-dockerd
	I0615 09:33:33.173347    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0615 09:33:33.176671    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0615 09:33:33.184036    1397 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0615 09:33:33.256172    1397 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0615 09:33:33.326477    1397 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0615 09:33:33.326492    1397 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0615 09:33:33.331797    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.394602    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:34.551420    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1568305s)
	I0615 09:33:34.551480    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.614918    1397 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0615 09:33:34.680379    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.741670    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.802995    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0615 09:33:34.810702    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.876193    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0615 09:33:34.899281    1397 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0615 09:33:34.899375    1397 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0615 09:33:34.902005    1397 start.go:534] Will wait 60s for crictl version
	I0615 09:33:34.902039    1397 ssh_runner.go:195] Run: which crictl
	I0615 09:33:34.903665    1397 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0615 09:33:34.922827    1397 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0615 09:33:34.922910    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.936535    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.948006    1397 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0615 09:33:34.948101    1397 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0615 09:33:34.949468    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 09:33:34.953059    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:34.953103    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:34.958178    1397 docker.go:636] Got preloaded images: 
	I0615 09:33:34.958185    1397 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0615 09:33:34.958223    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:34.960919    1397 ssh_runner.go:195] Run: which lz4
	I0615 09:33:34.962156    1397 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0615 09:33:34.963566    1397 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0615 09:33:34.963580    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0615 09:33:36.258372    1397 docker.go:600] Took 1.296282 seconds to copy over tarball
	I0615 09:33:36.258440    1397 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0615 09:33:37.363016    1397 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.104582708s)
	I0615 09:33:37.363034    1397 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0615 09:33:37.379849    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:37.383479    1397 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0615 09:33:37.388752    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:37.449408    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:38.998025    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.548624125s)
	I0615 09:33:38.998130    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:39.004063    1397 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0615 09:33:39.004075    1397 cache_images.go:84] Images are preloaded, skipping loading
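	The preload decision above rests on a single probe: when the pinned kube-apiserver tag is absent from docker images, the tarball is copied and unpacked, and after the docker restart the listing confirms all eight images landed. The same check by hand (image tag taken from this log):

	  docker images --format '{{.Repository}}:{{.Tag}}' \
	    | grep -qx 'registry.k8s.io/kube-apiserver:v1.27.3' \
	    && echo 'preloaded' || echo 'preload tarball needed'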
	I0615 09:33:39.004149    1397 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0615 09:33:39.011968    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:39.011977    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:39.012000    1397 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0615 09:33:39.012011    1397 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-477000 NodeName:addons-477000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0615 09:33:39.012111    1397 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-477000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
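
	Everything from InitConfiguration down to KubeProxyConfiguration above is what gets written to /var/tmp/minikube/kubeadm.yaml (the 2099-byte scp a few lines below). A config like this can be exercised before committing the node to it, e.g. with kubeadm's standard dry-run mode (not a minikube command):

	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run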
	
	I0615 09:33:39.012156    1397 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-477000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0615 09:33:39.012203    1397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0615 09:33:39.015681    1397 binaries.go:44] Found k8s binaries, skipping transfer
	I0615 09:33:39.015718    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0615 09:33:39.018849    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0615 09:33:39.023920    1397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0615 09:33:39.028733    1397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0615 09:33:39.033571    1397 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0615 09:33:39.034818    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 09:33:39.038909    1397 certs.go:56] Setting up /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000 for IP: 192.168.105.2
	I0615 09:33:39.038918    1397 certs.go:190] acquiring lock for shared ca certs: {Name:mk9ee4d7ca68f2cc32c8609d33f6ce33c43a91d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.039073    1397 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key
	I0615 09:33:39.109209    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt ...
	I0615 09:33:39.109214    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt: {Name:mka7538e8370ad0560f47e28d206b077e2dbbef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109425    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key ...
	I0615 09:33:39.109428    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key: {Name:mkca6c7de675216938ac1a6663738af412e2d280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109532    1397 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key
	I0615 09:33:39.219574    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt ...
	I0615 09:33:39.219577    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt: {Name:mk21a595039c96735254391e5270364a73e52306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219709    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key ...
	I0615 09:33:39.219712    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key: {Name:mk96cab9f1987887c2b313cd365bdba518ec818b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219826    1397 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key
	I0615 09:33:39.219831    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt with IP's: []
	I0615 09:33:39.435828    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt ...
	I0615 09:33:39.435835    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: {Name:mk0f2105a4c5fdba007e9c77c7945365dc3f96af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436029    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key ...
	I0615 09:33:39.436031    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key: {Name:mk65e491f4b4c1ee8d05045efb9265b2c697a551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436124    1397 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969
	I0615 09:33:39.436133    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0615 09:33:39.510125    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 ...
	I0615 09:33:39.510129    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969: {Name:mk7c90d062166950585957cb3f0ce136594c9cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510277    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 ...
	I0615 09:33:39.510280    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969: {Name:mk4266445b8f2d5bc078d169ee24b8765955e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510384    1397 certs.go:337] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt
	I0615 09:33:39.510598    1397 certs.go:341] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key
	I0615 09:33:39.510713    1397 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key
	I0615 09:33:39.510723    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt with IP's: []
	I0615 09:33:39.610633    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt ...
	I0615 09:33:39.610637    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt: {Name:mk04e06c13fe3eccffb62f328096a02f5668baa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.610779    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key ...
	I0615 09:33:39.610783    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key: {Name:mk21e3c8e84fcac9a2d9da5e0fa06b26ad1ee7cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.611042    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem (1679 bytes)
	I0615 09:33:39.611072    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem (1078 bytes)
	I0615 09:33:39.611094    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem (1123 bytes)
	I0615 09:33:39.611429    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem (1679 bytes)
	I0615 09:33:39.611960    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0615 09:33:39.619546    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0615 09:33:39.626711    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0615 09:33:39.633501    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0615 09:33:39.640010    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0615 09:33:39.647112    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0615 09:33:39.654063    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0615 09:33:39.660582    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0615 09:33:39.667533    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0615 09:33:39.674545    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0615 09:33:39.680339    1397 ssh_runner.go:195] Run: openssl version
	I0615 09:33:39.682379    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0615 09:33:39.685404    1397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686832    1397 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 15 16:33 /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686855    1397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.688703    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
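	The b5213941.0 link name above is not arbitrary: OpenSSL looks up CAs in /etc/ssl/certs by subject-hash filenames, and the openssl x509 -hash run two lines earlier is what produced that value. Reproducing it (cert path from this log):

	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941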
	I0615 09:33:39.691957    1397 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0615 09:33:39.693381    1397 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0615 09:33:39.693418    1397 kubeadm.go:404] StartCluster: {Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:39.693485    1397 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0615 09:33:39.699291    1397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0615 09:33:39.702928    1397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0615 09:33:39.706168    1397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0615 09:33:39.708902    1397 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0615 09:33:39.708925    1397 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0615 09:33:39.731022    1397 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0615 09:33:39.731051    1397 kubeadm.go:322] [preflight] Running pre-flight checks
	I0615 09:33:39.787198    1397 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0615 09:33:39.787252    1397 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0615 09:33:39.787291    1397 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0615 09:33:39.845524    1397 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0615 09:33:39.853720    1397 out.go:204]   - Generating certificates and keys ...
	I0615 09:33:39.853771    1397 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0615 09:33:39.853800    1397 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0615 09:33:40.047052    1397 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0615 09:33:40.281668    1397 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0615 09:33:40.373604    1397 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0615 09:33:40.496002    1397 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0615 09:33:40.752895    1397 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0615 09:33:40.752975    1397 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.889354    1397 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0615 09:33:40.889424    1397 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.967392    1397 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0615 09:33:41.132487    1397 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0615 09:33:41.175551    1397 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0615 09:33:41.175583    1397 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0615 09:33:41.275708    1397 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0615 09:33:41.313261    1397 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0615 09:33:41.394612    1397 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0615 09:33:41.488793    1397 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0615 09:33:41.495623    1397 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0615 09:33:41.495672    1397 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0615 09:33:41.495691    1397 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0615 09:33:41.565044    1397 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0615 09:33:41.570236    1397 out.go:204]   - Booting up control plane ...
	I0615 09:33:41.570302    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0615 09:33:41.570344    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0615 09:33:41.570389    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0615 09:33:41.570430    1397 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0615 09:33:41.570514    1397 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0615 09:33:45.571765    1397 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003295 seconds
	I0615 09:33:45.571857    1397 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0615 09:33:45.577408    1397 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0615 09:33:46.094757    1397 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0615 09:33:46.095006    1397 kubeadm.go:322] [mark-control-plane] Marking the node addons-477000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0615 09:33:46.613385    1397 kubeadm.go:322] [bootstrap-token] Using token: f4kg8y.q60xaa2tn5uwspbb
	I0615 09:33:46.619341    1397 out.go:204]   - Configuring RBAC rules ...
	I0615 09:33:46.619403    1397 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0615 09:33:46.620813    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0615 09:33:46.624663    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0615 09:33:46.625913    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0615 09:33:46.627185    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0615 09:33:46.628306    1397 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0615 09:33:46.632523    1397 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0615 09:33:46.806353    1397 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0615 09:33:47.022461    1397 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0615 09:33:47.022733    1397 kubeadm.go:322] 
	I0615 09:33:47.022774    1397 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0615 09:33:47.022780    1397 kubeadm.go:322] 
	I0615 09:33:47.022834    1397 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0615 09:33:47.022839    1397 kubeadm.go:322] 
	I0615 09:33:47.022851    1397 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0615 09:33:47.022879    1397 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0615 09:33:47.022912    1397 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0615 09:33:47.022916    1397 kubeadm.go:322] 
	I0615 09:33:47.022952    1397 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0615 09:33:47.022958    1397 kubeadm.go:322] 
	I0615 09:33:47.022992    1397 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0615 09:33:47.022995    1397 kubeadm.go:322] 
	I0615 09:33:47.023016    1397 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0615 09:33:47.023050    1397 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0615 09:33:47.023081    1397 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0615 09:33:47.023083    1397 kubeadm.go:322] 
	I0615 09:33:47.023121    1397 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0615 09:33:47.023158    1397 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0615 09:33:47.023161    1397 kubeadm.go:322] 
	I0615 09:33:47.023197    1397 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023261    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 \
	I0615 09:33:47.023273    1397 kubeadm.go:322] 	--control-plane 
	I0615 09:33:47.023278    1397 kubeadm.go:322] 
	I0615 09:33:47.023320    1397 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0615 09:33:47.023326    1397 kubeadm.go:322] 
	I0615 09:33:47.023380    1397 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023443    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 
	I0615 09:33:47.023525    1397 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
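	The bootstrap token in the join commands above carries the 24h TTL declared in the InitConfiguration; once it lapses, a replacement token and join line can be minted on the control plane with plain kubeadm:

	  sudo kubeadm token create --print-join-command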
	I0615 09:33:47.023586    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:47.023594    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:47.031274    1397 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0615 09:33:47.035321    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0615 09:33:47.038799    1397 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0615 09:33:47.043709    1397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0615 09:33:47.043747    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.043800    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627 minikube.k8s.io/name=addons-477000 minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.095849    1397 ops.go:34] apiserver oom_adj: -16
	I0615 09:33:47.095898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.645147    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.145079    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.645093    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.145044    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.645148    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.145134    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.645328    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.145310    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.645116    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.144609    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.645278    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.145243    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.645239    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.145000    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.644744    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.145233    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.644949    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.145008    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.644938    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.143430    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.645224    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.144898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.644909    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.144773    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.644338    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.144834    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.206574    1397 kubeadm.go:1081] duration metric: took 13.163176875s to wait for elevateKubeSystemPrivileges.
	I0615 09:34:00.206587    1397 kubeadm.go:406] StartCluster complete in 20.513668625s
	I0615 09:34:00.206614    1397 settings.go:142] acquiring lock: {Name:mk45a698fcd8dd8ae6984c9cf4ad4d183fdb5424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.206769    1397 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:34:00.206961    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/kubeconfig: {Name:mkbe9cac04fb467055323f2e3d5db2c6ddc287ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.207185    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0615 09:34:00.207249    1397 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0615 09:34:00.207317    1397 addons.go:66] Setting ingress=true in profile "addons-477000"
	I0615 09:34:00.207322    1397 addons.go:66] Setting ingress-dns=true in profile "addons-477000"
	I0615 09:34:00.207325    1397 addons.go:228] Setting addon ingress=true in "addons-477000"
	I0615 09:34:00.207327    1397 addons.go:228] Setting addon ingress-dns=true in "addons-477000"
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207358    1397 addons.go:66] Setting cloud-spanner=true in profile "addons-477000"
	I0615 09:34:00.207362    1397 addons.go:228] Setting addon cloud-spanner=true in "addons-477000"
	I0615 09:34:00.207371    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207402    1397 addons.go:66] Setting metrics-server=true in profile "addons-477000"
	I0615 09:34:00.207421    1397 addons.go:66] Setting registry=true in profile "addons-477000"
	I0615 09:34:00.207457    1397 addons.go:228] Setting addon registry=true in "addons-477000"
	I0615 09:34:00.207434    1397 addons.go:66] Setting inspektor-gadget=true in profile "addons-477000"
	I0615 09:34:00.207483    1397 addons.go:228] Setting addon inspektor-gadget=true in "addons-477000"
	I0615 09:34:00.207494    1397 addons.go:228] Setting addon metrics-server=true in "addons-477000"
	I0615 09:34:00.207502    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207433    1397 addons.go:66] Setting default-storageclass=true in profile "addons-477000"
	I0615 09:34:00.207531    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:34:00.207537    1397 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-477000"
	I0615 09:34:00.207575    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207475    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207436    1397 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-477000"
	I0615 09:34:00.207676    1397 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.207687    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207317    1397 addons.go:66] Setting volumesnapshots=true in profile "addons-477000"
	I0615 09:34:00.207735    1397 addons.go:228] Setting addon volumesnapshots=true in "addons-477000"
	I0615 09:34:00.207746    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207438    1397 addons.go:66] Setting gcp-auth=true in profile "addons-477000"
	I0615 09:34:00.207776    1397 mustload.go:65] Loading cluster: addons-477000
	I0615 09:34:00.207433    1397 addons.go:66] Setting storage-provisioner=true in profile "addons-477000"
	I0615 09:34:00.208143    1397 addons.go:228] Setting addon storage-provisioner=true in "addons-477000"
	I0615 09:34:00.208157    1397 host.go:66] Checking if "addons-477000" exists ...
	W0615 09:34:00.208299    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208325    1397 addons.go:274] "addons-477000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0615 09:34:00.208331    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208338    1397 addons.go:274] "addons-477000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0615 09:34:00.208333    1397 addons.go:464] Verifying addon registry=true in "addons-477000"
	I0615 09:34:00.211629    1397 out.go:177] * Verifying registry addon...
	W0615 09:34:00.208373    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.207993    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	W0615 09:34:00.208397    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208133    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208437    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208601    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208662    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.215730    1397 addons.go:228] Setting addon default-storageclass=true in "addons-477000"
	W0615 09:34:00.218550    1397 addons.go:274] "addons-477000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218559    1397 addons.go:274] "addons-477000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218580    1397 addons.go:274] "addons-477000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218593    1397 addons.go:274] "addons-477000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
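	Each "dial unix .../monitor: connect: connection refused" warning above means libmachine could not reach the VM's QEMU monitor socket, so the host state read failed and the addon was recorded as enabled without its manifests ever being applied ("skipping enablement"). A host-side check of the same state would be (sketch):

	  minikube status -p addons-477000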
	I0615 09:34:00.218944    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0615 09:34:00.219273    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.221576    1397 addons.go:464] Verifying addon metrics-server=true in "addons-477000"
	I0615 09:34:00.221583    1397 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0615 09:34:00.224623    1397 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0615 09:34:00.224631    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0615 09:34:00.224638    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.221629    1397 addons.go:464] Verifying addon ingress=true in "addons-477000"
	I0615 09:34:00.221636    1397 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.221723    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.228543    1397 out.go:177] * Verifying ingress addon...
	I0615 09:34:00.238557    1397 out.go:177] * Verifying csi-hostpath-driver addon...
	I0615 09:34:00.229248    1397 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.235988    1397 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0615 09:34:00.241352    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0615 09:34:00.242560    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0615 09:34:00.242594    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.242973    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0615 09:34:00.245385    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0615 09:34:00.251897    1397 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0615 09:34:00.275682    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0615 09:34:00.278596    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0615 09:34:00.278604    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0615 09:34:00.310787    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0615 09:34:00.310799    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0615 09:34:00.340750    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0615 09:34:00.340763    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0615 09:34:00.370147    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.374756    1397 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0615 09:34:00.374766    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0615 09:34:00.391483    1397 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.391493    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0615 09:34:00.396735    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.725129    1397 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-477000" context rescaled to 1 replicas
	I0615 09:34:00.725155    1397 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:34:00.731827    1397 out.go:177] * Verifying Kubernetes components...
	I0615 09:34:00.735986    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 09:34:01.130555    1397 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
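
	(The `sed` pipeline logged at 09:34:00.275 rewrites the coredns ConfigMap in place so that cluster DNS resolves host.minikube.internal to the host, 192.168.105.1 here. A minimal sketch of checking the result; the jsonpath query is an assumption, while the inserted block and addresses are taken verbatim from the sed expression above:)

	  # Sketch only — assumes kubectl access to the same cluster.
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	  # Per the sed expression above, the Corefile should now contain, ahead of
	  # the "forward . /etc/resolv.conf" plugin:
	  #         hosts {
	  #            192.168.105.1 host.minikube.internal
	  #            fallthrough
	  #         }
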
	W0615 09:34:01.273320    1397 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0615 09:34:01.273347    1397 retry.go:31] will retry after 358.412085ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
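
	(The failure above is an ordering race: the single `kubectl apply` batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass together, and the REST mapping for snapshot.storage.k8s.io/v1 is not yet established when the class is submitted — hence "ensure CRDs are installed first". minikube simply retries, and the `apply --force` at 09:34:01.633 below succeeds once the CRDs are registered. A hedged sketch of avoiding the race by waiting for CRD establishment first; the two-step split is an illustration, not what minikube itself does:)

	  # Apply the CRD first, wait until the API server establishes it, then apply
	  # the custom resource that depends on it.
	  kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	  kubectl wait --for=condition=established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
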
	I0615 09:34:01.273795    1397 node_ready.go:35] waiting up to 6m0s for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275272    1397 node_ready.go:49] node "addons-477000" has status "Ready":"True"
	I0615 09:34:01.275281    1397 node_ready.go:38] duration metric: took 1.477792ms waiting for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275284    1397 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 09:34:01.279498    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:01.633151    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:02.299497    1397 pod_ready.go:92] pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.299518    1397 pod_ready.go:81] duration metric: took 1.020034208s waiting for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.299526    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303736    1397 pod_ready.go:92] pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.303743    1397 pod_ready.go:81] duration metric: took 4.212458ms waiting for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303749    1397 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307230    1397 pod_ready.go:92] pod "etcd-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.307237    1397 pod_ready.go:81] duration metric: took 3.484042ms waiting for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307243    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311004    1397 pod_ready.go:92] pod "kube-apiserver-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.311013    1397 pod_ready.go:81] duration metric: took 3.766916ms waiting for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311019    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481809    1397 pod_ready.go:92] pod "kube-controller-manager-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.481828    1397 pod_ready.go:81] duration metric: took 170.807958ms waiting for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481838    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883885    1397 pod_ready.go:92] pod "kube-proxy-8rgcs" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.883919    1397 pod_ready.go:81] duration metric: took 402.082375ms waiting for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883933    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277736    1397 pod_ready.go:92] pod "kube-scheduler-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:03.277748    1397 pod_ready.go:81] duration metric: took 393.817875ms waiting for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277754    1397 pod_ready.go:38] duration metric: took 2.002511417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 09:34:03.277768    1397 api_server.go:52] waiting for apiserver process to appear ...
	I0615 09:34:03.277845    1397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 09:34:03.986804    1397 api_server.go:72] duration metric: took 3.261712416s to wait for apiserver process to appear ...
	I0615 09:34:03.986816    1397 api_server.go:88] waiting for apiserver healthz status ...
	I0615 09:34:03.986824    1397 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0615 09:34:03.986882    1397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.353767917s)
	I0615 09:34:03.990093    1397 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0615 09:34:03.990734    1397 api_server.go:141] control plane version: v1.27.3
	I0615 09:34:03.990742    1397 api_server.go:131] duration metric: took 3.923291ms to wait for apiserver health ...
	I0615 09:34:03.990745    1397 system_pods.go:43] waiting for kube-system pods to appear ...
	I0615 09:34:03.993833    1397 system_pods.go:59] 9 kube-system pods found
	I0615 09:34:03.993840    1397 system_pods.go:61] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.993843    1397 system_pods.go:61] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.993845    1397 system_pods.go:61] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.993848    1397 system_pods.go:61] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.993851    1397 system_pods.go:61] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.993853    1397 system_pods.go:61] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.993855    1397 system_pods.go:61] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.993859    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993864    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993866    1397 system_pods.go:74] duration metric: took 3.119166ms to wait for pod list to return data ...
	I0615 09:34:03.993869    1397 default_sa.go:34] waiting for default service account to be created ...
	I0615 09:34:03.995049    1397 default_sa.go:45] found service account: "default"
	I0615 09:34:03.995055    1397 default_sa.go:55] duration metric: took 1.183708ms for default service account to be created ...
	I0615 09:34:03.995057    1397 system_pods.go:116] waiting for k8s-apps to be running ...
	I0615 09:34:03.998400    1397 system_pods.go:86] 9 kube-system pods found
	I0615 09:34:03.998409    1397 system_pods.go:89] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.998411    1397 system_pods.go:89] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.998414    1397 system_pods.go:89] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.998416    1397 system_pods.go:89] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.998419    1397 system_pods.go:89] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.998421    1397 system_pods.go:89] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.998424    1397 system_pods.go:89] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.998429    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998433    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998436    1397 system_pods.go:126] duration metric: took 3.376208ms to wait for k8s-apps to be running ...
	I0615 09:34:03.998439    1397 system_svc.go:44] waiting for kubelet service to be running ....
	I0615 09:34:03.998489    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 09:34:04.003913    1397 system_svc.go:56] duration metric: took 5.471458ms WaitForService to wait for kubelet.
	I0615 09:34:04.003921    1397 kubeadm.go:581] duration metric: took 3.278833625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0615 09:34:04.003932    1397 node_conditions.go:102] verifying NodePressure condition ...
	I0615 09:34:04.077208    1397 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0615 09:34:04.077239    1397 node_conditions.go:123] node cpu capacity is 2
	I0615 09:34:04.077244    1397 node_conditions.go:105] duration metric: took 73.311333ms to run NodePressure ...
	I0615 09:34:04.077249    1397 start.go:228] waiting for startup goroutines ...
	I0615 09:34:06.831960    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0615 09:34:06.832053    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.882622    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0615 09:34:06.891297    1397 addons.go:228] Setting addon gcp-auth=true in "addons-477000"
	I0615 09:34:06.891339    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:06.892599    1397 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0615 09:34:06.892612    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.928262    1397 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0615 09:34:06.932997    1397 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0615 09:34:06.937187    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0615 09:34:06.937194    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0615 09:34:06.943495    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0615 09:34:06.943502    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0615 09:34:06.949337    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:06.949343    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0615 09:34:06.954968    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:07.475178    1397 addons.go:464] Verifying addon gcp-auth=true in "addons-477000"
	I0615 09:34:07.478304    1397 out.go:177] * Verifying gcp-auth addon...
	I0615 09:34:07.485666    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0615 09:34:07.491991    1397 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0615 09:34:07.492002    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:07.996710    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:08.496921    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.002133    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.494606    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.995080    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.495704    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.995530    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:11.495877    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.001470    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.497446    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.001473    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.502268    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.997362    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.503184    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.997798    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:15.495991    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.000278    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.501895    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.001719    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.495416    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.995757    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:18.496835    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:19.002478    1397 kapi.go:107] duration metric: took 11.51706925s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0615 09:34:19.008171    1397 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-477000 cluster.
	I0615 09:34:19.011889    1397 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0615 09:34:19.016120    1397 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
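
	(On the opt-out message above: the label key `gcp-auth-skip-secret` comes from the log; the value "true" and the pod name/image below are illustrative assumptions, not taken from this report. A minimal sketch of a pod that the gcp-auth webhook would leave unmutated:)

	  # Sketch only — pod name, image, and label value are hypothetical.
	  kubectl apply -f - <<'EOF'
	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: skip-gcp-auth-demo
	    labels:
	      gcp-auth-skip-secret: "true"   # key per the minikube message above
	  spec:
	    containers:
	    - name: app
	      image: nginx
	  EOF
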
	I0615 09:40:00.215049    1397 kapi.go:107] duration metric: took 6m0.00478975s to wait for kubernetes.io/minikube-addons=registry ...
	W0615 09:40:00.215455    1397 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0615 09:40:00.235888    1397 kapi.go:107] duration metric: took 6m0.001641708s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0615 09:40:00.235937    1397 kapi.go:107] duration metric: took 6m0.008679083s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0615 09:40:00.236028    1397 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0615 09:40:00.236088    1397 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0615 09:40:00.243957    1397 out.go:177] * Enabled addons: inspektor-gadget, metrics-server, cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0615 09:40:00.250970    1397 addons.go:499] enable addons completed in 6m0.052449292s: enabled=[inspektor-gadget metrics-server cloud-spanner ingress-dns storage-provisioner default-storageclass volumesnapshots gcp-auth]
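
	(Three of the addon waits above hit their 6m0s deadline — registry, csi-hostpath-driver, and ingress — matching the TestAddons/parallel failures in this report; the container-status table later in the log likewise shows no registry, csi, or ingress-nginx containers ever started. A sketch for inspecting the stuck pods using the same label selectors the waiter logged at 09:34:00:)

	  kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry
	  kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
	  kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
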
	I0615 09:40:00.251042    1397 start.go:233] waiting for cluster config update ...
	I0615 09:40:00.251069    1397 start.go:242] writing updated cluster config ...
	I0615 09:40:00.255738    1397 ssh_runner.go:195] Run: rm -f paused
	I0615 09:40:00.403218    1397 start.go:582] kubectl: 1.25.9, cluster: 1.27.3 (minor skew: 2)
	I0615 09:40:00.405982    1397 out.go:177] 
	W0615 09:40:00.410033    1397 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.3.
	I0615 09:40:00.413868    1397 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0615 09:40:00.421952    1397 out.go:177] * Done! kubectl is now configured to use "addons-477000" cluster and "default" namespace by default
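
	(On the skew warning above: client 1.25.9 against cluster 1.27.3 is two minor versions apart, one more than kubectl's supported ±1 skew. A quick, hedged way to confirm both versions — the grep filter is illustrative:)

	  kubectl version --output=json | grep gitVersion              # client and server
	  minikube kubectl -- version --output=json | grep gitVersion  # version-matched client
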
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 17:01:53 UTC. --
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.712778061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.712798672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:12 addons-477000 dockerd[1091]: time="2023-06-15T16:34:12.754255249Z" level=info msg="ignoring event" container=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754320625Z" level=info msg="shim disconnected" id=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c namespace=moby
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754344286Z" level=warning msg="cleaning up after shim disconnected" id=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c namespace=moby
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754349480Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.793607046Z" level=info msg="shim disconnected" id=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1091]: time="2023-06-15T16:34:13.793691952Z" level=info msg="ignoring event" container=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.794164777Z" level=warning msg="cleaning up after shim disconnected" id=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.794176995Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1091]: time="2023-06-15T16:34:14.817608715Z" level=info msg="ignoring event" container=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.817442834Z" level=info msg="shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819525140Z" level=warning msg="cleaning up after shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819574276Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.579995441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580279599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580317285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580357122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c316c00ee755585c1753e0f1d6364e1731871da5d072484c67c43cac67cd349/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 15 16:34:15 addons-477000 dockerd[1091]: time="2023-06-15T16:34:15.926803390Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:18Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269480431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269881621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269894061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269898788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	6a4bcd8ac64ff       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              27 minutes ago      Running             gcp-auth                     0                   0c316c00ee755
	8527d6f42bef1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   27 minutes ago      Running             volume-snapshot-controller   0                   f6bd41ad4abf6
	06a9dab9c48b6       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   27 minutes ago      Running             volume-snapshot-controller   0                   629404aaee996
	256eaaad3894a       97e04611ad434                                                                                                             27 minutes ago      Running             coredns                      0                   f6fc2a0d05c4a
	29b72a92c6578       fb73e92641fd5                                                                                                             27 minutes ago      Running             kube-proxy                   0                   405ca9198a355
	733213e41e3b9       bcb9e554eaab6                                                                                                             28 minutes ago      Running             kube-scheduler               0                   25817e506c78b
	b11fb0f325644       39dfb036b0986                                                                                                             28 minutes ago      Running             kube-apiserver               0                   0dde73a500899
	66de98cb24ea0       ab3683b584ae5                                                                                                             28 minutes ago      Running             kube-controller-manager      0                   69ef168f52131
	41a6909f99a59       24bc64e911039                                                                                                             28 minutes ago      Running             etcd                         0                   9b969e901cc05
	
	* 
	* ==> coredns [256eaaad3894] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55502 - 31535 "HINFO IN 8156761713541019547.3807690688336836625. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.006087175s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-477000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-477000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627
	                    minikube.k8s.io/name=addons-477000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Jun 2023 16:33:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-477000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Jun 2023 17:01:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Jun 2023 17:00:23 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Jun 2023 17:00:23 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Jun 2023 17:00:23 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Jun 2023 17:00:23 +0000   Thu, 15 Jun 2023 16:33:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-477000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 5009a87f17804889a5a4616073b937e0
	  System UUID:                5009a87f17804889a5a4616073b937e0
	  Boot ID:                    9630f686-3c90-436f-98e6-d8c6686f510a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
  gcp-auth                    gcp-auth-58478865f7-2pgxv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  kube-system                 coredns-5d78c9869d-mds5s                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     27m
  kube-system                 etcd-addons-477000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         28m
  kube-system                 kube-apiserver-addons-477000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
  kube-system                 kube-controller-manager-addons-477000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
  kube-system                 kube-proxy-8rgcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  kube-system                 kube-scheduler-addons-477000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
  kube-system                 snapshot-controller-75bbb956b9-p6hk4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
  kube-system                 snapshot-controller-75bbb956b9-prqv8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
  cpu                750m (37%)  0 (0%)
  memory             170Mi (4%)  170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node addons-477000 status is now: NodeReady
	  Normal  RegisteredNode           27m                node-controller  Node addons-477000 event: Registered Node addons-477000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.687452] EINJ: EINJ table not found.
	[  +0.627011] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043411] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000812] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.868427] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.067044] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.422105] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.174556] systemd-fstab-generator[729]: Ignoring "noauto" for root device
	[  +0.069729] systemd-fstab-generator[740]: Ignoring "noauto" for root device
	[  +0.066761] systemd-fstab-generator[753]: Ignoring "noauto" for root device
	[  +1.220689] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[  +0.067164] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.058616] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.062347] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +0.069889] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +2.576383] systemd-fstab-generator[1084]: Ignoring "noauto" for root device
	[  +1.530737] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.580821] systemd-fstab-generator[1404]: Ignoring "noauto" for root device
	[  +5.139726] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[Jun15 16:34] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.392909] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.125194] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.280200] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.114632] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [41a6909f99a5] <==
	* {"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-477000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-06-15T16:43:43.957Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":747}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":747,"took":"2.441695ms","hash":524925281}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":524925281,"revision":747,"compact-revision":-1}
	{"level":"info","ts":"2023-06-15T16:48:43.971Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":897}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":897,"took":"1.284283ms","hash":2514030906}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2514030906,"revision":897,"compact-revision":747}
	{"level":"info","ts":"2023-06-15T16:53:43.979Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1048}
	{"level":"info","ts":"2023-06-15T16:53:43.981Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1048,"took":"857.726µs","hash":834622362}
	{"level":"info","ts":"2023-06-15T16:53:43.981Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":834622362,"revision":1048,"compact-revision":897}
	{"level":"info","ts":"2023-06-15T16:58:43.988Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1198}
	{"level":"info","ts":"2023-06-15T16:58:43.991Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1198,"took":"1.183357ms","hash":1870194874}
	{"level":"info","ts":"2023-06-15T16:58:43.991Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1870194874,"revision":1198,"compact-revision":1048}
	
	* 
	* ==> gcp-auth [6a4bcd8ac64f] <==
	* 2023/06/15 16:34:18 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  17:01:53 up 28 min,  0 users,  load average: 0.62, 0.54, 0.45
	Linux addons-477000 5.10.57 #1 SMP PREEMPT Wed Jun 14 05:08:37 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b11fb0f32564] <==
	* I0615 16:34:07.712516       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0615 16:38:44.748912       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:38:44.749316       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:38:44.757965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:38:44.758323       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.761094       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.761179       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.769254       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.769477       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.769594       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.769676       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:48:44.750393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:48:44.750930       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:48:44.765734       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:48:44.766097       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:53:44.751144       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:53:44.751426       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:53:44.766204       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:53:44.766395       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.752713       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.753513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.754419       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.754518       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.763858       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.764268       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [66de98cb24ea] <==
	* I0615 16:34:13.731338       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:13.817171       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.755476       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:14.766120       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.820829       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.823621       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.825690       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0615 16:34:14.825754       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.826668       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.850370       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.758931       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.761497       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.764164       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0615 16:34:15.764226       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.766220       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.768259       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:29.767346       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0615 16:34:29.767460       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0615 16:34:29.868459       1 shared_informer.go:318] Caches are synced for resource quota
	I0615 16:34:30.190420       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0615 16:34:30.296099       1 shared_informer.go:318] Caches are synced for garbage collector
	I0615 16:34:44.034182       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:44.057184       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:45.016712       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:45.039501       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [29b72a92c657] <==
	* I0615 16:34:01.157223       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0615 16:34:01.157274       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0615 16:34:01.157290       1 server_others.go:554] "Using iptables proxy"
	I0615 16:34:01.207136       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0615 16:34:01.207158       1 server_others.go:192] "Using iptables Proxier"
	I0615 16:34:01.207188       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0615 16:34:01.207493       1 server.go:658] "Version info" version="v1.27.3"
	I0615 16:34:01.207499       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0615 16:34:01.208029       1 config.go:188] "Starting service config controller"
	I0615 16:34:01.208049       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0615 16:34:01.208060       1 config.go:97] "Starting endpoint slice config controller"
	I0615 16:34:01.208062       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0615 16:34:01.209533       1 config.go:315] "Starting node config controller"
	I0615 16:34:01.209537       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0615 16:34:01.308743       1 shared_informer.go:318] Caches are synced for service config
	I0615 16:34:01.308782       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0615 16:34:01.309993       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [733213e41e3b] <==
	* W0615 16:33:44.753904       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:44.754011       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:44.754034       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:44.754072       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:44.754021       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:44.754081       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:44.754001       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0615 16:33:44.754100       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0615 16:33:44.754136       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0615 16:33:44.754145       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0615 16:33:45.605616       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0615 16:33:45.605673       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0615 16:33:45.647245       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:45.647292       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:45.699650       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0615 16:33:45.699699       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0615 16:33:45.702358       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0615 16:33:45.702403       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0615 16:33:45.718371       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:45.718408       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:45.723261       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:45.723281       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:45.755043       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0615 16:33:45.755066       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0615 16:33:46.350596       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 17:01:53 UTC. --
	Jun 15 16:56:47 addons-477000 kubelet[2256]: E0615 16:56:47.333470    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:56:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:56:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:56:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:57:47 addons-477000 kubelet[2256]: E0615 16:57:47.331859    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:57:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:57:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:57:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:58:47 addons-477000 kubelet[2256]: W0615 16:58:47.318879    2256 machine.go:65] Cannot read vendor id correctly, set empty.
	Jun 15 16:58:47 addons-477000 kubelet[2256]: E0615 16:58:47.332386    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:58:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:58:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:58:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:59:47 addons-477000 kubelet[2256]: E0615 16:59:47.331594    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:59:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:59:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:59:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:00:47 addons-477000 kubelet[2256]: E0615 17:00:47.331224    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:00:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:00:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:00:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:01:47 addons-477000 kubelet[2256]: E0615 17:01:47.337776    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:01:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:01:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:01:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-477000 -n addons-477000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-477000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/InspektorGadget FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/InspektorGadget (480.84s)
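Note on the kubelet journal above: the same "Could not set up iptables canary" error repeats once a minute because creating the KUBE-KUBELET-CANARY chain in the ip6tables "nat" table exits with status 3 — the guest kernel exposes no IPv6 nat table. A minimal way to confirm this from the host is sketched below; it assumes the addons-477000 VM is still running, and whether the buildroot guest kernel ships an ip6table_nat module at all is an assumption the logs do not confirm:

  # shell into the minikube node for this profile
  out/minikube-darwin-arm64 ssh -p addons-477000
  # reproduce the call the kubelet canary makes
  sudo ip6tables -t nat -L
  # check for / try to load the IPv6 nat module (hypothetical fix)
  lsmod | grep ip6table_nat || sudo modprobe ip6table_nat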

                                                
                                    
TestAddons/parallel/MetricsServer (720.81s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:381: failed waiting for metrics-server deployment to stabilize: timed out waiting for the condition
addons_test.go:383: metrics-server stabilized in 6m0.002019209s
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
addons_test.go:385: ***** TestAddons/parallel/MetricsServer: pod "k8s-app=metrics-server" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:385: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-477000 -n addons-477000
addons_test.go:385: TestAddons/parallel/MetricsServer: showing logs for failed pods as of 2023-06-15 10:04:01.513927 -0700 PDT m=+1895.798774876
addons_test.go:386: failed waiting for k8s-app=metrics-server pod: k8s-app=metrics-server within 6m0s: context deadline exceeded
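The failure above is the test timing out while polling the metrics-server deployment; the post-mortem below only captures minikube's own logs. A hedged triage sketch, assuming the kubeconfig context created by this run still exists (k8s-app=metrics-server is the selector the test itself waits on):

  # deployment and pod state the test was polling
  kubectl --context addons-477000 -n kube-system get deploy,pods -l k8s-app=metrics-server
  # scheduling / image-pull events for a stuck pod
  kubectl --context addons-477000 -n kube-system describe pods -l k8s-app=metrics-server
  # container logs, if the pod started at all
  kubectl --context addons-477000 -n kube-system logs -l k8s-app=metrics-server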
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-477000 -n addons-477000
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-477000 logs -n 25
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | --download-only -p             | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT |                     |
	|         | binary-mirror-062000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-062000        | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | -p addons-477000               | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:40 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:52 PDT |                     |
	|         | addons-477000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 10:01 PDT | 15 Jun 23 10:01 PDT |
	|         | -p addons-477000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 09:33:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 09:33:17.135905    1397 out.go:296] Setting OutFile to fd 1 ...
	I0615 09:33:17.136030    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136034    1397 out.go:309] Setting ErrFile to fd 2...
	I0615 09:33:17.136037    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136120    1397 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 09:33:17.137161    1397 out.go:303] Setting JSON to false
	I0615 09:33:17.152121    1397 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":168,"bootTime":1686846629,"procs":371,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 09:33:17.152202    1397 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 09:33:17.156891    1397 out.go:177] * [addons-477000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 09:33:17.159887    1397 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 09:33:17.163775    1397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:33:17.159984    1397 notify.go:220] Checking for updates...
	I0615 09:33:17.171800    1397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 09:33:17.174813    1397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 09:33:17.177828    1397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 09:33:17.180704    1397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 09:33:17.183887    1397 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 09:33:17.187819    1397 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 09:33:17.194783    1397 start.go:297] selected driver: qemu2
	I0615 09:33:17.194788    1397 start.go:884] validating driver "qemu2" against <nil>
	I0615 09:33:17.194794    1397 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 09:33:17.196752    1397 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 09:33:17.199793    1397 out.go:177] * Automatically selected the socket_vmnet network
	I0615 09:33:17.201307    1397 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 09:33:17.201331    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:17.201335    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:17.201341    1397 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 09:33:17.201349    1397 start_flags.go:319] config:
	{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:17.201434    1397 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 09:33:17.209829    1397 out.go:177] * Starting control plane node addons-477000 in cluster addons-477000
	I0615 09:33:17.213726    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:17.213749    1397 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 09:33:17.213768    1397 cache.go:57] Caching tarball of preloaded images
	I0615 09:33:17.213824    1397 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 09:33:17.213830    1397 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 09:33:17.214051    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:17.214063    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json: {Name:mkc1c34b82952aae697463d2d78c6ea098445790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:17.214292    1397 start.go:365] acquiring machines lock for addons-477000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 09:33:17.214399    1397 start.go:369] acquired machines lock for "addons-477000" in 101.583µs
	I0615 09:33:17.214409    1397 start.go:93] Provisioning new machine with config: &{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:33:17.214436    1397 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 09:33:17.221743    1397 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0615 09:33:17.568031    1397 start.go:159] libmachine.API.Create for "addons-477000" (driver="qemu2")
	I0615 09:33:17.568071    1397 client.go:168] LocalClient.Create starting
	I0615 09:33:17.568226    1397 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 09:33:17.626803    1397 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 09:33:17.737973    1397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 09:33:17.902563    1397 main.go:141] libmachine: Creating SSH key...
	I0615 09:33:17.968617    1397 main.go:141] libmachine: Creating Disk image...
	I0615 09:33:17.968623    1397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 09:33:17.969817    1397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.004891    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.004923    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.004982    1397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2 +20000M
	I0615 09:33:18.012411    1397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 09:33:18.012436    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.012455    1397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.012466    1397 main.go:141] libmachine: Starting QEMU VM...
	I0615 09:33:18.012501    1397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:25:cc:0f:2e:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.081537    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.081558    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.081562    1397 main.go:141] libmachine: Attempt 0
	I0615 09:33:18.081577    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:20.083712    1397 main.go:141] libmachine: Attempt 1
	I0615 09:33:20.083961    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:22.086122    1397 main.go:141] libmachine: Attempt 2
	I0615 09:33:22.086166    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:24.088201    1397 main.go:141] libmachine: Attempt 3
	I0615 09:33:24.088224    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:26.090268    1397 main.go:141] libmachine: Attempt 4
	I0615 09:33:26.090325    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:28.092361    1397 main.go:141] libmachine: Attempt 5
	I0615 09:33:28.092379    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094488    1397 main.go:141] libmachine: Attempt 6
	I0615 09:33:30.094575    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094985    1397 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0615 09:33:30.095099    1397 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 09:33:30.095124    1397 main.go:141] libmachine: Found match: 1a:25:cc:f:2e:6f
	I0615 09:33:30.095168    1397 main.go:141] libmachine: IP: 192.168.105.2
	I0615 09:33:30.095195    1397 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0615 09:33:32.115264    1397 machine.go:88] provisioning docker machine ...
	I0615 09:33:32.115338    1397 buildroot.go:166] provisioning hostname "addons-477000"
	I0615 09:33:32.116828    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.117588    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.117607    1397 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-477000 && echo "addons-477000" | sudo tee /etc/hostname
	I0615 09:33:32.199158    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-477000
	
	I0615 09:33:32.199283    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.199748    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.199763    1397 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-477000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-477000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-477000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0615 09:33:32.260846    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0615 09:33:32.260864    1397 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16718-868/.minikube CaCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16718-868/.minikube}
	I0615 09:33:32.260878    1397 buildroot.go:174] setting up certificates
	I0615 09:33:32.260906    1397 provision.go:83] configureAuth start
	I0615 09:33:32.260912    1397 provision.go:138] copyHostCerts
	I0615 09:33:32.261103    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem (1078 bytes)
	I0615 09:33:32.261436    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem (1123 bytes)
	I0615 09:33:32.262101    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem (1679 bytes)
	I0615 09:33:32.262442    1397 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem org=jenkins.addons-477000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-477000]
	I0615 09:33:32.306279    1397 provision.go:172] copyRemoteCerts
	I0615 09:33:32.306343    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0615 09:33:32.306360    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.335305    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0615 09:33:32.343471    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0615 09:33:32.351180    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0615 09:33:32.358490    1397 provision.go:86] duration metric: configureAuth took 97.576167ms
	I0615 09:33:32.358498    1397 buildroot.go:189] setting minikube options for container-runtime
	I0615 09:33:32.358950    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:33:32.358995    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.359216    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.359220    1397 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0615 09:33:32.410196    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0615 09:33:32.410204    1397 buildroot.go:70] root file system type: tmpfs
	I0615 09:33:32.410261    1397 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0615 09:33:32.410301    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.410550    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.410587    1397 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0615 09:33:32.468329    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0615 09:33:32.468380    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.468634    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.468643    1397 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0615 09:33:32.794674    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0615 09:33:32.794698    1397 machine.go:91] provisioned docker machine in 679.423792ms
	I0615 09:33:32.794704    1397 client.go:171] LocalClient.Create took 15.226996125s
	I0615 09:33:32.794723    1397 start.go:167] duration metric: libmachine.API.Create for "addons-477000" took 15.227064791s
	I0615 09:33:32.794726    1397 start.go:300] post-start starting for "addons-477000" (driver="qemu2")
	I0615 09:33:32.794731    1397 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0615 09:33:32.794812    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0615 09:33:32.794822    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.823879    1397 ssh_runner.go:195] Run: cat /etc/os-release
	I0615 09:33:32.825122    1397 info.go:137] Remote host: Buildroot 2021.02.12
	I0615 09:33:32.825128    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/addons for local assets ...
	I0615 09:33:32.825196    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/files for local assets ...
	I0615 09:33:32.825222    1397 start.go:303] post-start completed in 30.494125ms
	I0615 09:33:32.825555    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:32.825707    1397 start.go:128] duration metric: createHost completed in 15.611646375s
	I0615 09:33:32.825734    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.825947    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.825951    1397 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0615 09:33:32.876753    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686846813.320713543
	
	I0615 09:33:32.876760    1397 fix.go:206] guest clock: 1686846813.320713543
	I0615 09:33:32.876764    1397 fix.go:219] Guest: 2023-06-15 09:33:33.320713543 -0700 PDT Remote: 2023-06-15 09:33:32.825711 -0700 PDT m=+15.708594751 (delta=495.002543ms)
	I0615 09:33:32.876775    1397 fix.go:190] guest clock delta is within tolerance: 495.002543ms
	I0615 09:33:32.876778    1397 start.go:83] releasing machines lock for "addons-477000", held for 15.662753208s
	I0615 09:33:32.877060    1397 ssh_runner.go:195] Run: cat /version.json
	I0615 09:33:32.877067    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.877085    1397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0615 09:33:32.877121    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.951587    1397 ssh_runner.go:195] Run: systemctl --version
	I0615 09:33:32.953983    1397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0615 09:33:32.956008    1397 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0615 09:33:32.956040    1397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0615 09:33:32.961754    1397 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0615 09:33:32.961761    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:32.961877    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:32.967359    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0615 09:33:32.970783    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0615 09:33:32.973872    1397 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0615 09:33:32.973908    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0615 09:33:32.976794    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.979871    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0615 09:33:32.983273    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.986847    1397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0615 09:33:32.990009    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0615 09:33:32.992910    1397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0615 09:33:32.995885    1397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0615 09:33:32.999181    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.082046    1397 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0615 09:33:33.090305    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:33.090367    1397 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0615 09:33:33.095444    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.099628    1397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0615 09:33:33.106008    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.110583    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.115305    1397 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0615 09:33:33.157221    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.165685    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:33.171700    1397 ssh_runner.go:195] Run: which cri-dockerd
	I0615 09:33:33.173347    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0615 09:33:33.176671    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0615 09:33:33.184036    1397 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0615 09:33:33.256172    1397 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0615 09:33:33.326477    1397 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0615 09:33:33.326492    1397 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0615 09:33:33.331797    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.394602    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:34.551420    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1568305s)
	I0615 09:33:34.551480    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.614918    1397 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0615 09:33:34.680379    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.741670    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.802995    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0615 09:33:34.810702    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.876193    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0615 09:33:34.899281    1397 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0615 09:33:34.899375    1397 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0615 09:33:34.902005    1397 start.go:534] Will wait 60s for crictl version
	I0615 09:33:34.902039    1397 ssh_runner.go:195] Run: which crictl
	I0615 09:33:34.903665    1397 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0615 09:33:34.922827    1397 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0615 09:33:34.922910    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.936535    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.948006    1397 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0615 09:33:34.948101    1397 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0615 09:33:34.949468    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 09:33:34.953059    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:34.953103    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:34.958178    1397 docker.go:636] Got preloaded images: 
	I0615 09:33:34.958185    1397 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0615 09:33:34.958223    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:34.960919    1397 ssh_runner.go:195] Run: which lz4
	I0615 09:33:34.962156    1397 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0615 09:33:34.963566    1397 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0615 09:33:34.963580    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0615 09:33:36.258372    1397 docker.go:600] Took 1.296282 seconds to copy over tarball
	I0615 09:33:36.258440    1397 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0615 09:33:37.363016    1397 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.104582708s)
	I0615 09:33:37.363034    1397 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0615 09:33:37.379849    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:37.383479    1397 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0615 09:33:37.388752    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:37.449408    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:38.998025    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.548624125s)
	I0615 09:33:38.998130    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:39.004063    1397 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0615 09:33:39.004075    1397 cache_images.go:84] Images are preloaded, skipping loading
	I0615 09:33:39.004149    1397 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0615 09:33:39.011968    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:39.011977    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:39.012000    1397 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0615 09:33:39.012011    1397 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-477000 NodeName:addons-477000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0615 09:33:39.012111    1397 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-477000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
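
When debugging how this file was generated it can help to enumerate the documents the kubeadm.yaml contains and the kinds they declare. A stdlib-only sketch (the inline string abbreviates the config above; a real run would read /var/tmp/minikube/kubeadm.yaml):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    func main() {
        // Abbreviated stand-in for /var/tmp/minikube/kubeadm.yaml.
        cfg := `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration`

        kindRe := regexp.MustCompile(`(?m)^\s*kind: (.+)$`)
        for i, doc := range strings.Split(cfg, "---") {
            if m := kindRe.FindStringSubmatch(doc); m != nil {
                fmt.Printf("document %d: %s\n", i, m[1])
            }
        }
    }
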
	
	I0615 09:33:39.012156    1397 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-477000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0615 09:33:39.012203    1397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0615 09:33:39.015681    1397 binaries.go:44] Found k8s binaries, skipping transfer
	I0615 09:33:39.015718    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0615 09:33:39.018849    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0615 09:33:39.023920    1397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0615 09:33:39.028733    1397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0615 09:33:39.033571    1397 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0615 09:33:39.034818    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
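
The bash one-liner above makes the host record idempotent: grep -v strips any stale control-plane.minikube.internal line, the fresh entry is appended, and the temp file is copied back over /etc/hosts. The same edit expressed in Go (path and record are parameters; this sketch targets a scratch file rather than the real /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostRecord drops any existing line for host and appends a fresh entry.
    func upsertHostRecord(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Mirrors grep -v $'\t<host>$': keep everything except the old record.
            if line != "" && !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // Scratch file; the real target in the log is /etc/hosts (written via sudo cp).
        if err := upsertHostRecord("hosts.test", "192.168.105.2", "control-plane.minikube.internal"); err != nil {
            panic(err)
        }
    }
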
	I0615 09:33:39.038909    1397 certs.go:56] Setting up /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000 for IP: 192.168.105.2
	I0615 09:33:39.038918    1397 certs.go:190] acquiring lock for shared ca certs: {Name:mk9ee4d7ca68f2cc32c8609d33f6ce33c43a91d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.039073    1397 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key
	I0615 09:33:39.109209    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt ...
	I0615 09:33:39.109214    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt: {Name:mka7538e8370ad0560f47e28d206b077e2dbbef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109425    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key ...
	I0615 09:33:39.109428    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key: {Name:mkca6c7de675216938ac1a6663738af412e2d280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109532    1397 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key
	I0615 09:33:39.219574    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt ...
	I0615 09:33:39.219577    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt: {Name:mk21a595039c96735254391e5270364a73e52306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219709    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key ...
	I0615 09:33:39.219712    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key: {Name:mk96cab9f1987887c2b313cd365bdba518ec818b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219826    1397 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key
	I0615 09:33:39.219831    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt with IP's: []
	I0615 09:33:39.435828    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt ...
	I0615 09:33:39.435835    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: {Name:mk0f2105a4c5fdba007e9c77c7945365dc3f96af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436029    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key ...
	I0615 09:33:39.436031    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key: {Name:mk65e491f4b4c1ee8d05045efb9265b2c697a551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436124    1397 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969
	I0615 09:33:39.436133    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0615 09:33:39.510125    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 ...
	I0615 09:33:39.510129    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969: {Name:mk7c90d062166950585957cb3f0ce136594c9cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510277    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 ...
	I0615 09:33:39.510280    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969: {Name:mk4266445b8f2d5bc078d169ee24b8765955e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510384    1397 certs.go:337] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt
	I0615 09:33:39.510598    1397 certs.go:341] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key
	I0615 09:33:39.510713    1397 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key
	I0615 09:33:39.510723    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt with IP's: []
	I0615 09:33:39.610633    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt ...
	I0615 09:33:39.610637    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt: {Name:mk04e06c13fe3eccffb62f328096a02f5668baa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.610779    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key ...
	I0615 09:33:39.610783    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key: {Name:mk21e3c8e84fcac9a2d9da5e0fa06b26ad1ee7cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.611042    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem (1679 bytes)
	I0615 09:33:39.611072    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem (1078 bytes)
	I0615 09:33:39.611094    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem (1123 bytes)
	I0615 09:33:39.611429    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem (1679 bytes)
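
The crypto.go steps above reduce to: create a self-signed CA, then sign leaf certificates against it carrying the cluster's IP SANs (192.168.105.2, 10.96.0.1, 127.0.0.1, 10.0.0.1 for the apiserver cert). A condensed crypto/x509 sketch of that flow; subjects, serials, and lifetimes are illustrative, not minikube's exact values:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Step 1: a self-signed CA, analogous to generating ca.crt/ca.key.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour), // illustrative lifetime
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Step 2: a leaf certificate signed by the CA, with the IP SANs from the log.
        leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("192.168.105.2"),
                net.ParseIP("10.96.0.1"),
                net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"),
            },
        }
        leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        check(err)

        // PEM-encode the leaf, the equivalent of writing apiserver.crt.
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}))
    }
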
	I0615 09:33:39.611960    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0615 09:33:39.619546    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0615 09:33:39.626711    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0615 09:33:39.633501    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0615 09:33:39.640010    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0615 09:33:39.647112    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0615 09:33:39.654063    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0615 09:33:39.660582    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0615 09:33:39.667533    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0615 09:33:39.674545    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0615 09:33:39.680339    1397 ssh_runner.go:195] Run: openssl version
	I0615 09:33:39.682379    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0615 09:33:39.685404    1397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686832    1397 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 15 16:33 /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686855    1397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.688703    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
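
The symlink created above follows OpenSSL's hashed-directory convention: TLS stacks locate a CA in /etc/ssl/certs by the name <subject-hash>.0, where the hash (b5213941 here) comes from openssl x509 -hash. A small sketch of computing the hash and creating the link (it shells out to the openssl binary, and the paths are the illustrative ones from the log, so it needs root to run as-is):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // e.g. b5213941
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // ln -fs equivalent: remove any existing link, then create it fresh.
        _ = os.Remove(link)
        if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
            fmt.Println("symlink failed (likely needs root):", err)
        }
    }
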
	I0615 09:33:39.691957    1397 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0615 09:33:39.693381    1397 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0615 09:33:39.693418    1397 kubeadm.go:404] StartCluster: {Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:39.693485    1397 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0615 09:33:39.699291    1397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0615 09:33:39.702928    1397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0615 09:33:39.706168    1397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0615 09:33:39.708902    1397 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0615 09:33:39.708925    1397 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0615 09:33:39.731022    1397 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0615 09:33:39.731051    1397 kubeadm.go:322] [preflight] Running pre-flight checks
	I0615 09:33:39.787198    1397 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0615 09:33:39.787252    1397 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0615 09:33:39.787291    1397 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0615 09:33:39.845524    1397 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0615 09:33:39.853720    1397 out.go:204]   - Generating certificates and keys ...
	I0615 09:33:39.853771    1397 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0615 09:33:39.853800    1397 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0615 09:33:40.047052    1397 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0615 09:33:40.281668    1397 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0615 09:33:40.373604    1397 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0615 09:33:40.496002    1397 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0615 09:33:40.752895    1397 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0615 09:33:40.752975    1397 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.889354    1397 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0615 09:33:40.889424    1397 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.967392    1397 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0615 09:33:41.132487    1397 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0615 09:33:41.175551    1397 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0615 09:33:41.175583    1397 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0615 09:33:41.275708    1397 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0615 09:33:41.313261    1397 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0615 09:33:41.394612    1397 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0615 09:33:41.488793    1397 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0615 09:33:41.495623    1397 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0615 09:33:41.495672    1397 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0615 09:33:41.495691    1397 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0615 09:33:41.565044    1397 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0615 09:33:41.570236    1397 out.go:204]   - Booting up control plane ...
	I0615 09:33:41.570302    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0615 09:33:41.570344    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0615 09:33:41.570389    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0615 09:33:41.570430    1397 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0615 09:33:41.570514    1397 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0615 09:33:45.571765    1397 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003295 seconds
	I0615 09:33:45.571857    1397 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0615 09:33:45.577408    1397 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0615 09:33:46.094757    1397 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0615 09:33:46.095006    1397 kubeadm.go:322] [mark-control-plane] Marking the node addons-477000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0615 09:33:46.613385    1397 kubeadm.go:322] [bootstrap-token] Using token: f4kg8y.q60xaa2tn5uwspbb
	I0615 09:33:46.619341    1397 out.go:204]   - Configuring RBAC rules ...
	I0615 09:33:46.619403    1397 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0615 09:33:46.620813    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0615 09:33:46.624663    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0615 09:33:46.625913    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0615 09:33:46.627185    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0615 09:33:46.628306    1397 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0615 09:33:46.632523    1397 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0615 09:33:46.806353    1397 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0615 09:33:47.022461    1397 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0615 09:33:47.022733    1397 kubeadm.go:322] 
	I0615 09:33:47.022774    1397 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0615 09:33:47.022780    1397 kubeadm.go:322] 
	I0615 09:33:47.022834    1397 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0615 09:33:47.022839    1397 kubeadm.go:322] 
	I0615 09:33:47.022851    1397 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0615 09:33:47.022879    1397 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0615 09:33:47.022912    1397 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0615 09:33:47.022916    1397 kubeadm.go:322] 
	I0615 09:33:47.022952    1397 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0615 09:33:47.022958    1397 kubeadm.go:322] 
	I0615 09:33:47.022992    1397 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0615 09:33:47.022995    1397 kubeadm.go:322] 
	I0615 09:33:47.023016    1397 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0615 09:33:47.023050    1397 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0615 09:33:47.023081    1397 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0615 09:33:47.023083    1397 kubeadm.go:322] 
	I0615 09:33:47.023121    1397 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0615 09:33:47.023158    1397 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0615 09:33:47.023161    1397 kubeadm.go:322] 
	I0615 09:33:47.023197    1397 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023261    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 \
	I0615 09:33:47.023273    1397 kubeadm.go:322] 	--control-plane 
	I0615 09:33:47.023278    1397 kubeadm.go:322] 
	I0615 09:33:47.023320    1397 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0615 09:33:47.023326    1397 kubeadm.go:322] 
	I0615 09:33:47.023380    1397 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023443    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 
	I0615 09:33:47.023525    1397 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0615 09:33:47.023586    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:47.023594    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:47.031274    1397 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0615 09:33:47.035321    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0615 09:33:47.038799    1397 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
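
The 457-byte conflist written above configures the bridge CNI plugin with the 10.244.0.0/16 pod CIDR chosen earlier. A representative conflist of that shape, embedded in a small Go writer; the JSON is typical for a bridge + host-local IPAM setup, not necessarily byte-for-byte what minikube ships as 1-k8s.conflist:

    package main

    import "os"

    // Representative bridge CNI config; minikube's actual file may differ in detail.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
        // The log's target is /etc/cni/net.d/1-k8s.conflist; write locally here.
        if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0644); err != nil {
            panic(err)
        }
    }
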
	I0615 09:33:47.043709    1397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0615 09:33:47.043747    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.043800    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627 minikube.k8s.io/name=addons-477000 minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.095849    1397 ops.go:34] apiserver oom_adj: -16
	I0615 09:33:47.095898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.645147    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.145079    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.645093    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.145044    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.645148    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.145134    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.645328    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.145310    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.645116    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.144609    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.645278    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.145243    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.645239    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.145000    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.644744    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.145233    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.644949    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.145008    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.644938    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.143430    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.645224    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.144898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.644909    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.144773    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.644338    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.144834    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.206574    1397 kubeadm.go:1081] duration metric: took 13.163176875s to wait for elevateKubeSystemPrivileges.
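
The burst of identical "kubectl get sa default" runs above is a fixed-interval poll: the command is retried roughly every 500ms until the default service account exists, which takes about 13 seconds here because the apiserver has to finish bootstrapping it. A generic version of that loop (the kubectl path and kubeconfig flag are copied from the log; pollUntil itself is a hypothetical helper):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // pollUntil reruns cmd every interval until it succeeds or the deadline passes.
    func pollUntil(interval, timeout time.Duration, cmd func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            err := cmd()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        err := pollUntil(500*time.Millisecond, 3*time.Minute, func() error {
            // Same command the log repeats until the service account appears.
            return exec.Command("sudo", "/var/lib/minikube/binaries/v1.27.3/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
        })
        fmt.Println("default service account ready:", err == nil)
    }
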
	I0615 09:34:00.206587    1397 kubeadm.go:406] StartCluster complete in 20.513668625s
	I0615 09:34:00.206614    1397 settings.go:142] acquiring lock: {Name:mk45a698fcd8dd8ae6984c9cf4ad4d183fdb5424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.206769    1397 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:34:00.206961    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/kubeconfig: {Name:mkbe9cac04fb467055323f2e3d5db2c6ddc287ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.207185    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0615 09:34:00.207249    1397 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0615 09:34:00.207317    1397 addons.go:66] Setting ingress=true in profile "addons-477000"
	I0615 09:34:00.207322    1397 addons.go:66] Setting ingress-dns=true in profile "addons-477000"
	I0615 09:34:00.207325    1397 addons.go:228] Setting addon ingress=true in "addons-477000"
	I0615 09:34:00.207327    1397 addons.go:228] Setting addon ingress-dns=true in "addons-477000"
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207358    1397 addons.go:66] Setting cloud-spanner=true in profile "addons-477000"
	I0615 09:34:00.207362    1397 addons.go:228] Setting addon cloud-spanner=true in "addons-477000"
	I0615 09:34:00.207371    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207402    1397 addons.go:66] Setting metrics-server=true in profile "addons-477000"
	I0615 09:34:00.207421    1397 addons.go:66] Setting registry=true in profile "addons-477000"
	I0615 09:34:00.207457    1397 addons.go:228] Setting addon registry=true in "addons-477000"
	I0615 09:34:00.207434    1397 addons.go:66] Setting inspektor-gadget=true in profile "addons-477000"
	I0615 09:34:00.207483    1397 addons.go:228] Setting addon inspektor-gadget=true in "addons-477000"
	I0615 09:34:00.207494    1397 addons.go:228] Setting addon metrics-server=true in "addons-477000"
	I0615 09:34:00.207502    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207433    1397 addons.go:66] Setting default-storageclass=true in profile "addons-477000"
	I0615 09:34:00.207531    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:34:00.207537    1397 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-477000"
	I0615 09:34:00.207575    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207475    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207436    1397 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-477000"
	I0615 09:34:00.207676    1397 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.207687    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207317    1397 addons.go:66] Setting volumesnapshots=true in profile "addons-477000"
	I0615 09:34:00.207735    1397 addons.go:228] Setting addon volumesnapshots=true in "addons-477000"
	I0615 09:34:00.207746    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207438    1397 addons.go:66] Setting gcp-auth=true in profile "addons-477000"
	I0615 09:34:00.207776    1397 mustload.go:65] Loading cluster: addons-477000
	I0615 09:34:00.207433    1397 addons.go:66] Setting storage-provisioner=true in profile "addons-477000"
	I0615 09:34:00.208143    1397 addons.go:228] Setting addon storage-provisioner=true in "addons-477000"
	I0615 09:34:00.208157    1397 host.go:66] Checking if "addons-477000" exists ...
	W0615 09:34:00.208299    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208325    1397 addons.go:274] "addons-477000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0615 09:34:00.208331    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208338    1397 addons.go:274] "addons-477000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0615 09:34:00.208333    1397 addons.go:464] Verifying addon registry=true in "addons-477000"
	I0615 09:34:00.211629    1397 out.go:177] * Verifying registry addon...
	W0615 09:34:00.208373    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.207993    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	W0615 09:34:00.208397    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208133    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208437    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208601    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208662    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.215730    1397 addons.go:228] Setting addon default-storageclass=true in "addons-477000"
	W0615 09:34:00.218550    1397 addons.go:274] "addons-477000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218559    1397 addons.go:274] "addons-477000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218580    1397 addons.go:274] "addons-477000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218593    1397 addons.go:274] "addons-477000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0615 09:34:00.218944    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0615 09:34:00.219273    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.221576    1397 addons.go:464] Verifying addon metrics-server=true in "addons-477000"
	I0615 09:34:00.221583    1397 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0615 09:34:00.224623    1397 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0615 09:34:00.224631    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0615 09:34:00.224638    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.221629    1397 addons.go:464] Verifying addon ingress=true in "addons-477000"
	I0615 09:34:00.221636    1397 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.221723    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.228543    1397 out.go:177] * Verifying ingress addon...
	I0615 09:34:00.238557    1397 out.go:177] * Verifying csi-hostpath-driver addon...
	I0615 09:34:00.229248    1397 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.235988    1397 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0615 09:34:00.241352    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0615 09:34:00.242560    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0615 09:34:00.242594    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.242973    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0615 09:34:00.245385    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0615 09:34:00.251897    1397 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0615 09:34:00.275682    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0615 09:34:00.278596    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0615 09:34:00.278604    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0615 09:34:00.310787    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0615 09:34:00.310799    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0615 09:34:00.340750    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0615 09:34:00.340763    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0615 09:34:00.370147    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.374756    1397 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0615 09:34:00.374766    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0615 09:34:00.391483    1397 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.391493    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0615 09:34:00.396735    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.725129    1397 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-477000" context rescaled to 1 replicas
	I0615 09:34:00.725155    1397 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:34:00.731827    1397 out.go:177] * Verifying Kubernetes components...
	I0615 09:34:00.735986    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 09:34:01.130555    1397 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
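
The sed pipeline a few lines earlier splices a hosts block into CoreDNS's Corefile immediately above the "forward . /etc/resolv.conf" directive, so host.minikube.internal resolves to the host gateway before queries fall through to the node's resolver (the pipeline also inserts a log directive before errors; only the hosts part is shown here). The same splice in Go, using an inline sample Corefile in place of the real coredns ConfigMap:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Minimal sample; the real Corefile comes from the coredns ConfigMap.
        corefile := `.:53 {
            errors
            health
            forward . /etc/resolv.conf {
               max_concurrent 1000
            }
            cache 30
    }`
        hostsBlock := `        hosts {
               192.168.105.1 host.minikube.internal
               fallthrough
            }`
        var out []string
        for _, line := range strings.Split(corefile, "\n") {
            // Insert the hosts block just before the forward directive,
            // mirroring the sed -e '/forward .../i ...' edit in the log.
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out = append(out, hostsBlock)
            }
            out = append(out, line)
        }
        fmt.Println(strings.Join(out, "\n"))
    }
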
	W0615 09:34:01.273320    1397 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0615 09:34:01.273347    1397 retry.go:31] will retry after 358.412085ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
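
The failure above is an ordering problem, not a real error: kubectl apply posts the VolumeSnapshotClass object before the CRD that defines it has been established, so the REST mapping lookup fails; minikube simply waits ~358ms and re-applies (with --force, per the follow-up at 09:34:01.633151), and the second attempt succeeds once the CRDs are registered. A stripped-down version of that retry pattern; the backoff schedule is illustrative, not minikube's retry.go values:

    package main

    import (
        "fmt"
        "time"
    )

    // retryWithBackoff retries fn, doubling the delay between attempts.
    func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(delay)
            delay *= 2
        }
        return fmt.Errorf("all %d attempts failed: %w", attempts, err)
    }

    func main() {
        calls := 0
        err := retryWithBackoff(5, 350*time.Millisecond, func() error {
            calls++
            if calls < 2 {
                // Stand-in for: no matches for kind "VolumeSnapshotClass"
                // (the CRD is created but not yet established on first apply).
                return fmt.Errorf("resource mapping not found")
            }
            return nil // second apply succeeds once the CRDs are registered
        })
        fmt.Println("applied after", calls, "attempts, err:", err)
    }
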
	I0615 09:34:01.273795    1397 node_ready.go:35] waiting up to 6m0s for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275272    1397 node_ready.go:49] node "addons-477000" has status "Ready":"True"
	I0615 09:34:01.275281    1397 node_ready.go:38] duration metric: took 1.477792ms waiting for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275284    1397 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 09:34:01.279498    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:01.633151    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:02.299497    1397 pod_ready.go:92] pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.299518    1397 pod_ready.go:81] duration metric: took 1.020034208s waiting for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.299526    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303736    1397 pod_ready.go:92] pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.303743    1397 pod_ready.go:81] duration metric: took 4.212458ms waiting for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303749    1397 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307230    1397 pod_ready.go:92] pod "etcd-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.307237    1397 pod_ready.go:81] duration metric: took 3.484042ms waiting for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307243    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311004    1397 pod_ready.go:92] pod "kube-apiserver-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.311013    1397 pod_ready.go:81] duration metric: took 3.766916ms waiting for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311019    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481809    1397 pod_ready.go:92] pod "kube-controller-manager-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.481828    1397 pod_ready.go:81] duration metric: took 170.807958ms waiting for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481838    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883885    1397 pod_ready.go:92] pod "kube-proxy-8rgcs" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.883919    1397 pod_ready.go:81] duration metric: took 402.082375ms waiting for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883933    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277736    1397 pod_ready.go:92] pod "kube-scheduler-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:03.277748    1397 pod_ready.go:81] duration metric: took 393.817875ms waiting for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277754    1397 pod_ready.go:38] duration metric: took 2.002511417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 09:34:03.277768    1397 api_server.go:52] waiting for apiserver process to appear ...
	I0615 09:34:03.277845    1397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 09:34:03.986804    1397 api_server.go:72] duration metric: took 3.261712416s to wait for apiserver process to appear ...
	I0615 09:34:03.986816    1397 api_server.go:88] waiting for apiserver healthz status ...
	I0615 09:34:03.986824    1397 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0615 09:34:03.986882    1397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.353767917s)
	I0615 09:34:03.990093    1397 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0615 09:34:03.990734    1397 api_server.go:141] control plane version: v1.27.3
	I0615 09:34:03.990742    1397 api_server.go:131] duration metric: took 3.923291ms to wait for apiserver health ...
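
The healthz probe above is a plain HTTPS GET that treats a 200 response with body "ok" as healthy. An equivalent standalone check in Go; certificate verification is skipped here purely for brevity, whereas minikube trusts the CA it generated earlier:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Quick-probe shortcut only; real code should pin the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.105.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect: 200 ok
    }
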
	I0615 09:34:03.990745    1397 system_pods.go:43] waiting for kube-system pods to appear ...
	I0615 09:34:03.993833    1397 system_pods.go:59] 9 kube-system pods found
	I0615 09:34:03.993840    1397 system_pods.go:61] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.993843    1397 system_pods.go:61] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.993845    1397 system_pods.go:61] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.993848    1397 system_pods.go:61] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.993851    1397 system_pods.go:61] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.993853    1397 system_pods.go:61] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.993855    1397 system_pods.go:61] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.993859    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993864    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993866    1397 system_pods.go:74] duration metric: took 3.119166ms to wait for pod list to return data ...
	I0615 09:34:03.993869    1397 default_sa.go:34] waiting for default service account to be created ...
	I0615 09:34:03.995049    1397 default_sa.go:45] found service account: "default"
	I0615 09:34:03.995055    1397 default_sa.go:55] duration metric: took 1.183708ms for default service account to be created ...
	I0615 09:34:03.995057    1397 system_pods.go:116] waiting for k8s-apps to be running ...
	I0615 09:34:03.998400    1397 system_pods.go:86] 9 kube-system pods found
	I0615 09:34:03.998409    1397 system_pods.go:89] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.998411    1397 system_pods.go:89] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.998414    1397 system_pods.go:89] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.998416    1397 system_pods.go:89] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.998419    1397 system_pods.go:89] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.998421    1397 system_pods.go:89] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.998424    1397 system_pods.go:89] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.998429    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998433    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998436    1397 system_pods.go:126] duration metric: took 3.376208ms to wait for k8s-apps to be running ...
	I0615 09:34:03.998439    1397 system_svc.go:44] waiting for kubelet service to be running ....
	I0615 09:34:03.998489    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 09:34:04.003913    1397 system_svc.go:56] duration metric: took 5.471458ms WaitForService to wait for kubelet.
	I0615 09:34:04.003921    1397 kubeadm.go:581] duration metric: took 3.278833625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0615 09:34:04.003932    1397 node_conditions.go:102] verifying NodePressure condition ...
	I0615 09:34:04.077208    1397 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0615 09:34:04.077239    1397 node_conditions.go:123] node cpu capacity is 2
	I0615 09:34:04.077244    1397 node_conditions.go:105] duration metric: took 73.311333ms to run NodePressure ...
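
The NodePressure check above reads the node's capacity and condition fields; the same data is reachable with jsonpath (a sketch):

    kubectl --context addons-477000 get node addons-477000 \
      -o jsonpath='{.status.capacity.cpu}{" cpu / "}{.status.capacity.ephemeral-storage}{" ephemeral-storage"}{"\n"}'
    # MemoryPressure / DiskPressure / PIDPressure conditions:
    kubectl --context addons-477000 get node addons-477000 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
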
	I0615 09:34:04.077249    1397 start.go:228] waiting for startup goroutines ...
	I0615 09:34:06.831960    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0615 09:34:06.832053    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.882622    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0615 09:34:06.891297    1397 addons.go:228] Setting addon gcp-auth=true in "addons-477000"
	I0615 09:34:06.891339    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:06.892599    1397 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0615 09:34:06.892612    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.928262    1397 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0615 09:34:06.932997    1397 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0615 09:34:06.937187    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0615 09:34:06.937194    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0615 09:34:06.943495    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0615 09:34:06.943502    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0615 09:34:06.949337    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:06.949343    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0615 09:34:06.954968    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:07.475178    1397 addons.go:464] Verifying addon gcp-auth=true in "addons-477000"
	I0615 09:34:07.478304    1397 out.go:177] * Verifying gcp-auth addon...
	I0615 09:34:07.485666    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0615 09:34:07.491991    1397 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0615 09:34:07.492002    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:07.996710    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:08.496921    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.002133    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.494606    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.995080    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.495704    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.995530    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:11.495877    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.001470    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.497446    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.001473    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.502268    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.997362    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.503184    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.997798    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:15.495991    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.000278    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.501895    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.001719    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.495416    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.995757    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:18.496835    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:19.002478    1397 kapi.go:107] duration metric: took 11.51706925s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0615 09:34:19.008171    1397 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-477000 cluster.
	I0615 09:34:19.011889    1397 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0615 09:34:19.016120    1397 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
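
The opt-out described above is a plain pod label. A minimal sketch; only the gcp-auth-skip-secret key comes from the message, while the pod name and image are illustrative:

    kubectl --context addons-477000 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: skip-gcp-auth-demo      # hypothetical name
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]
    EOF
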
	I0615 09:40:00.215049    1397 kapi.go:107] duration metric: took 6m0.00478975s to wait for kubernetes.io/minikube-addons=registry ...
	W0615 09:40:00.215455    1397 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0615 09:40:00.235888    1397 kapi.go:107] duration metric: took 6m0.001641708s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0615 09:40:00.235937    1397 kapi.go:107] duration metric: took 6m0.008679083s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0615 09:40:00.236028    1397 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0615 09:40:00.236088    1397 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0615 09:40:00.243957    1397 out.go:177] * Enabled addons: inspektor-gadget, metrics-server, cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0615 09:40:00.250970    1397 addons.go:499] enable addons completed in 6m0.052449292s: enabled=[inspektor-gadget metrics-server cloud-spanner ingress-dns storage-provisioner default-storageclass volumesnapshots gcp-auth]
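
registry, csi-hostpath-driver and ingress all hit the 6m callback deadline above while gcp-auth and the rest came up. Assuming the cluster itself stayed healthy and only the wait timed out, re-running the enables is the obvious retry (a sketch):

    minikube -p addons-477000 addons enable registry
    minikube -p addons-477000 addons enable csi-hostpath-driver
    minikube -p addons-477000 addons enable ingress
    # then watch the pods the callbacks were polling for:
    kubectl --context addons-477000 get pods -A -l kubernetes.io/minikube-addons=registry
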
	I0615 09:40:00.251042    1397 start.go:233] waiting for cluster config update ...
	I0615 09:40:00.251069    1397 start.go:242] writing updated cluster config ...
	I0615 09:40:00.255738    1397 ssh_runner.go:195] Run: rm -f paused
	I0615 09:40:00.403218    1397 start.go:582] kubectl: 1.25.9, cluster: 1.27.3 (minor skew: 2)
	I0615 09:40:00.405982    1397 out.go:177] 
	W0615 09:40:00.410033    1397 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.3.
	I0615 09:40:00.413868    1397 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0615 09:40:00.421952    1397 out.go:177] * Done! kubectl is now configured to use "addons-477000" cluster and "default" namespace by default
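
kubectl's support policy is one minor version either side of the apiserver, so the 1.25.9 client against a 1.27.3 cluster (skew 2) earns the warning above. The version-matched client minikube downloads avoids it:

    minikube -p addons-477000 kubectl -- version
    minikube -p addons-477000 kubectl -- get pods -A
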
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 17:04:01 UTC. --
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.817442834Z" level=info msg="shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819525140Z" level=warning msg="cleaning up after shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819574276Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.579995441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580279599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580317285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580357122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c316c00ee755585c1753e0f1d6364e1731871da5d072484c67c43cac67cd349/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 15 16:34:15 addons-477000 dockerd[1091]: time="2023-06-15T16:34:15.926803390Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:18Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269480431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269881621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269894061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269898788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:01:55 addons-477000 dockerd[1097]: time="2023-06-15T17:01:55.073287389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 17:01:55 addons-477000 dockerd[1097]: time="2023-06-15T17:01:55.073319639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:01:55 addons-477000 dockerd[1097]: time="2023-06-15T17:01:55.073330764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:01:55 addons-477000 dockerd[1097]: time="2023-06-15T17:01:55.073337722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:01:55 addons-477000 cri-dockerd[991]: time="2023-06-15T17:01:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef7c71126455ea790a0991802fb95ede312f12f8ea91a16a91dba330edd51c13/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 15 17:01:55 addons-477000 dockerd[1091]: time="2023-06-15T17:01:55.472931913Z" level=warning msg="reference for unknown type: " digest="sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be" remote="ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	Jun 15 17:02:00 addons-477000 cri-dockerd[991]: time="2023-06-15T17:02:00Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.17.1@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	Jun 15 17:02:00 addons-477000 dockerd[1097]: time="2023-06-15T17:02:00.127950539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 17:02:00 addons-477000 dockerd[1097]: time="2023-06-15T17:02:00.127982330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:02:00 addons-477000 dockerd[1097]: time="2023-06-15T17:02:00.128158870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:02:00 addons-477000 dockerd[1097]: time="2023-06-15T17:02:00.128170953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	9ee1f57e827a2       ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be                     2 minutes ago       Running             headlamp                     0                   ef7c71126455e
	6a4bcd8ac64ff       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              29 minutes ago      Running             gcp-auth                     0                   0c316c00ee755
	8527d6f42bef1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   29 minutes ago      Running             volume-snapshot-controller   0                   f6bd41ad4abf6
	06a9dab9c48b6       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   29 minutes ago      Running             volume-snapshot-controller   0                   629404aaee996
	256eaaad3894a       97e04611ad434                                                                                                             30 minutes ago      Running             coredns                      0                   f6fc2a0d05c4a
	29b72a92c6578       fb73e92641fd5                                                                                                             30 minutes ago      Running             kube-proxy                   0                   405ca9198a355
	733213e41e3b9       bcb9e554eaab6                                                                                                             30 minutes ago      Running             kube-scheduler               0                   25817e506c78b
	b11fb0f325644       39dfb036b0986                                                                                                             30 minutes ago      Running             kube-apiserver               0                   0dde73a500899
	66de98cb24ea0       ab3683b584ae5                                                                                                             30 minutes ago      Running             kube-controller-manager      0                   69ef168f52131
	41a6909f99a59       24bc64e911039                                                                                                             30 minutes ago      Running             etcd                         0                   9b969e901cc05
	
	* 
	* ==> coredns [256eaaad3894] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55502 - 31535 "HINFO IN 8156761713541019547.3807690688336836625. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.006087175s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-477000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-477000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627
	                    minikube.k8s.io/name=addons-477000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Jun 2023 16:33:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-477000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Jun 2023 17:03:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Jun 2023 17:02:25 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Jun 2023 17:02:25 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Jun 2023 17:02:25 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Jun 2023 17:02:25 +0000   Thu, 15 Jun 2023 16:33:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-477000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 5009a87f17804889a5a4616073b937e0
	  System UUID:                5009a87f17804889a5a4616073b937e0
	  Boot ID:                    9630f686-3c90-436f-98e6-d8c6686f510a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-2pgxv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  headlamp                    headlamp-6b5756787-s7gx7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 coredns-5d78c9869d-mds5s                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     30m
	  kube-system                 etcd-addons-477000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         30m
	  kube-system                 kube-apiserver-addons-477000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-controller-manager-addons-477000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-proxy-8rgcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 kube-scheduler-addons-477000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 snapshot-controller-75bbb956b9-p6hk4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	  kube-system                 snapshot-controller-75bbb956b9-prqv8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         30m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30m                kube-proxy       
	  Normal  Starting                 30m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30m (x8 over 30m)  kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30m (x8 over 30m)  kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30m (x7 over 30m)  kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 30m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  30m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  30m                kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30m                kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30m                kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                30m                kubelet          Node addons-477000 status is now: NodeReady
	  Normal  RegisteredNode           30m                node-controller  Node addons-477000 event: Registered Node addons-477000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.687452] EINJ: EINJ table not found.
	[  +0.627011] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043411] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000812] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.868427] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.067044] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.422105] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.174556] systemd-fstab-generator[729]: Ignoring "noauto" for root device
	[  +0.069729] systemd-fstab-generator[740]: Ignoring "noauto" for root device
	[  +0.066761] systemd-fstab-generator[753]: Ignoring "noauto" for root device
	[  +1.220689] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[  +0.067164] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.058616] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.062347] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +0.069889] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +2.576383] systemd-fstab-generator[1084]: Ignoring "noauto" for root device
	[  +1.530737] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.580821] systemd-fstab-generator[1404]: Ignoring "noauto" for root device
	[  +5.139726] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[Jun15 16:34] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.392909] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.125194] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.280200] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.114632] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [41a6909f99a5] <==
	* {"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-06-15T16:43:43.957Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":747}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":747,"took":"2.441695ms","hash":524925281}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":524925281,"revision":747,"compact-revision":-1}
	{"level":"info","ts":"2023-06-15T16:48:43.971Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":897}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":897,"took":"1.284283ms","hash":2514030906}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2514030906,"revision":897,"compact-revision":747}
	{"level":"info","ts":"2023-06-15T16:53:43.979Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1048}
	{"level":"info","ts":"2023-06-15T16:53:43.981Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1048,"took":"857.726µs","hash":834622362}
	{"level":"info","ts":"2023-06-15T16:53:43.981Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":834622362,"revision":1048,"compact-revision":897}
	{"level":"info","ts":"2023-06-15T16:58:43.988Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1198}
	{"level":"info","ts":"2023-06-15T16:58:43.991Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1198,"took":"1.183357ms","hash":1870194874}
	{"level":"info","ts":"2023-06-15T16:58:43.991Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1870194874,"revision":1198,"compact-revision":1048}
	{"level":"info","ts":"2023-06-15T17:03:44.005Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1348}
	{"level":"info","ts":"2023-06-15T17:03:44.007Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1348,"took":"1.080862ms","hash":686046031}
	{"level":"info","ts":"2023-06-15T17:03:44.008Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":686046031,"revision":1348,"compact-revision":1198}
	
	* 
	* ==> gcp-auth [6a4bcd8ac64f] <==
	* 2023/06/15 16:34:18 GCP Auth Webhook started!
	2023/06/15 17:01:54 Ready to marshal response ...
	2023/06/15 17:01:54 Ready to write response ...
	2023/06/15 17:01:54 Ready to marshal response ...
	2023/06/15 17:01:54 Ready to write response ...
	2023/06/15 17:01:54 Ready to marshal response ...
	2023/06/15 17:01:54 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  17:04:01 up 30 min,  0 users,  load average: 0.61, 0.54, 0.46
	Linux addons-477000 5.10.57 #1 SMP PREEMPT Wed Jun 14 05:08:37 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b11fb0f32564] <==
	* I0615 16:43:44.769254       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.769477       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.769594       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.769676       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:48:44.750393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:48:44.750930       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:48:44.765734       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:48:44.766097       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:53:44.751144       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:53:44.751426       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:53:44.766204       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:53:44.766395       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.752713       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.753513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.754419       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.754518       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.763858       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.764268       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 17:01:54.687994       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs=map[IPv4:10.102.63.39]
	I0615 17:03:44.753468       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 17:03:44.753599       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 17:03:44.754120       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 17:03:44.754191       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 17:03:44.764221       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 17:03:44.764679       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [66de98cb24ea] <==
	* I0615 16:34:14.820829       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.823621       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.825690       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0615 16:34:14.825754       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.826668       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.850370       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.758931       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.761497       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.764164       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0615 16:34:15.764226       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.766220       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.768259       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:29.767346       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0615 16:34:29.767460       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0615 16:34:29.868459       1 shared_informer.go:318] Caches are synced for resource quota
	I0615 16:34:30.190420       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0615 16:34:30.296099       1 shared_informer.go:318] Caches are synced for garbage collector
	I0615 16:34:44.034182       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:44.057184       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:45.016712       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:45.039501       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 17:01:54.698531       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-6b5756787 to 1"
	I0615 17:01:54.702625       1 event.go:307] "Event occurred" object="headlamp/headlamp-6b5756787" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-6b5756787-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	E0615 17:01:54.708381       1 replica_set.go:544] sync "headlamp/headlamp-6b5756787" failed with pods "headlamp-6b5756787-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0615 17:01:54.719597       1 event.go:307] "Event occurred" object="headlamp/headlamp-6b5756787" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-6b5756787-s7gx7"
	
	* 
	* ==> kube-proxy [29b72a92c657] <==
	* I0615 16:34:01.157223       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0615 16:34:01.157274       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0615 16:34:01.157290       1 server_others.go:554] "Using iptables proxy"
	I0615 16:34:01.207136       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0615 16:34:01.207158       1 server_others.go:192] "Using iptables Proxier"
	I0615 16:34:01.207188       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0615 16:34:01.207493       1 server.go:658] "Version info" version="v1.27.3"
	I0615 16:34:01.207499       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0615 16:34:01.208029       1 config.go:188] "Starting service config controller"
	I0615 16:34:01.208049       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0615 16:34:01.208060       1 config.go:97] "Starting endpoint slice config controller"
	I0615 16:34:01.208062       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0615 16:34:01.209533       1 config.go:315] "Starting node config controller"
	I0615 16:34:01.209537       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0615 16:34:01.308743       1 shared_informer.go:318] Caches are synced for service config
	I0615 16:34:01.308782       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0615 16:34:01.309993       1 shared_informer.go:318] Caches are synced for node config
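
Note the route_localnet=1 line above: the iptables proxier sets that sysctl so NodePorts answer on localhost. It can be confirmed from inside the VM (a sketch):

    minikube -p addons-477000 ssh -- sysctl net.ipv4.conf.all.route_localnet
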
	
	* 
	* ==> kube-scheduler [733213e41e3b] <==
	* W0615 16:33:44.753904       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:44.754011       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:44.754034       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:44.754072       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:44.754021       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:44.754081       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:44.754001       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0615 16:33:44.754100       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0615 16:33:44.754136       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0615 16:33:44.754145       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0615 16:33:45.605616       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0615 16:33:45.605673       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0615 16:33:45.647245       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:45.647292       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:45.699650       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0615 16:33:45.699699       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0615 16:33:45.702358       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0615 16:33:45.702403       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0615 16:33:45.718371       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:45.718408       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:45.723261       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:45.723281       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:45.755043       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0615 16:33:45.755066       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0615 16:33:46.350596       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 17:04:02 UTC. --
	Jun 15 17:00:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:00:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:01:47 addons-477000 kubelet[2256]: E0615 17:01:47.337776    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:01:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:01:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:01:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:01:54 addons-477000 kubelet[2256]: I0615 17:01:54.725995    2256 topology_manager.go:212] "Topology Admit Handler"
	Jun 15 17:01:54 addons-477000 kubelet[2256]: E0615 17:01:54.726029    2256 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97aae91c-bd78-4045-80d6-0e820c3c1327" containerName="patch"
	Jun 15 17:01:54 addons-477000 kubelet[2256]: E0615 17:01:54.726035    2256 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="31753346-d9bd-4f8e-86ce-83f09e681ee4" containerName="create"
	Jun 15 17:01:54 addons-477000 kubelet[2256]: E0615 17:01:54.726038    2256 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="97aae91c-bd78-4045-80d6-0e820c3c1327" containerName="patch"
	Jun 15 17:01:54 addons-477000 kubelet[2256]: I0615 17:01:54.726051    2256 memory_manager.go:346] "RemoveStaleState removing state" podUID="31753346-d9bd-4f8e-86ce-83f09e681ee4" containerName="create"
	Jun 15 17:01:54 addons-477000 kubelet[2256]: I0615 17:01:54.726054    2256 memory_manager.go:346] "RemoveStaleState removing state" podUID="97aae91c-bd78-4045-80d6-0e820c3c1327" containerName="patch"
	Jun 15 17:01:54 addons-477000 kubelet[2256]: I0615 17:01:54.726057    2256 memory_manager.go:346] "RemoveStaleState removing state" podUID="97aae91c-bd78-4045-80d6-0e820c3c1327" containerName="patch"
	Jun 15 17:01:54 addons-477000 kubelet[2256]: I0615 17:01:54.870998    2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzmhv\" (UniqueName: \"kubernetes.io/projected/c8b4fd92-ec77-4410-a719-d72bf0a83b8a-kube-api-access-hzmhv\") pod \"headlamp-6b5756787-s7gx7\" (UID: \"c8b4fd92-ec77-4410-a719-d72bf0a83b8a\") " pod="headlamp/headlamp-6b5756787-s7gx7"
	Jun 15 17:01:54 addons-477000 kubelet[2256]: I0615 17:01:54.871044    2256 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/c8b4fd92-ec77-4410-a719-d72bf0a83b8a-gcp-creds\") pod \"headlamp-6b5756787-s7gx7\" (UID: \"c8b4fd92-ec77-4410-a719-d72bf0a83b8a\") " pod="headlamp/headlamp-6b5756787-s7gx7"
	Jun 15 17:02:00 addons-477000 kubelet[2256]: I0615 17:02:00.793128    2256 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="headlamp/headlamp-6b5756787-s7gx7" podStartSLOduration=1.935454831 podCreationTimestamp="2023-06-15 17:01:54 +0000 UTC" firstStartedPulling="2023-06-15 17:01:55.220131291 +0000 UTC m=+1687.989604034" lastFinishedPulling="2023-06-15 17:02:00.077767766 +0000 UTC m=+1692.847240509" observedRunningTime="2023-06-15 17:02:00.784754288 +0000 UTC m=+1693.554227072" watchObservedRunningTime="2023-06-15 17:02:00.793091306 +0000 UTC m=+1693.562564132"
	Jun 15 17:02:47 addons-477000 kubelet[2256]: E0615 17:02:47.334096    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:02:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:02:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:02:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:03:47 addons-477000 kubelet[2256]: W0615 17:03:47.316897    2256 machine.go:65] Cannot read vendor id correctly, set empty.
	Jun 15 17:03:47 addons-477000 kubelet[2256]: E0615 17:03:47.330218    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:03:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:03:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:03:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
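
The recurring canary failure above means this Buildroot guest kernel has no ip6tables nat support; for an IPv4-only cluster it is noise, not a fault. A sketch of confirming it from the host, assuming the standard ip6table_nat module name (which this kernel evidently does not ship, so expect the modprobe itself to fail):

    minikube -p addons-477000 ssh -- sudo modprobe ip6table_nat
    minikube -p addons-477000 ssh -- sudo ip6tables -t nat -L
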
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-477000 -n addons-477000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-477000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (720.81s)
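
For context on this failure: the addons summary in the log above shows metrics-server was enabled, but the test's pod wait never completed. Two quick post-mortem checks, assuming the standard deployment name the addon installs:

    kubectl --context addons-477000 -n kube-system get deployment metrics-server
    kubectl --context addons-477000 top nodes
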

TestAddons/parallel/CSI (671.15s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:535: failed waiting for csi-hostpath-driver pods to stabilize: context deadline exceeded
addons_test.go:537: csi-hostpath-driver pods stabilized in 6m0.002514084s
addons_test.go:540: (dbg) Run:  kubectl --context addons-477000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-477000 get pvc hpvc -o jsonpath={.status.phase} -n default
[... the identical helpers_test.go:394 poll above was repeated ~310 more times until the 6m0s deadline expired; duplicate lines elided ...]
addons_test.go:546: failed waiting for PVC hpvc: context deadline exceeded
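Note: the elided block above is a single readiness probe re-run until the harness deadline. A minimal sketch of an equivalent manual wait; the target phase "Bound" and the 2s interval are assumptions (the report shows only the probe command, not the expected value):

	# Poll the PVC phase the same way helpers_test.go:394 does, for up to 6m.
	for i in $(seq 1 180); do
	  phase=$(kubectl --context addons-477000 get pvc hpvc -n default -o jsonpath='{.status.phase}')
	  if [ "$phase" = "Bound" ]; then echo "PVC hpvc bound"; exit 0; fi
	  sleep 2
	done
	echo "timed out waiting for PVC hpvc (last observed phase: ${phase:-<none>})" >&2
	exit 1

When a PVC never binds, kubectl describe pvc hpvc -n default and the csi-hostpath-driver pod logs are usually more informative than the phase alone; the post-mortem below captures part of that state.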
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-477000 -n addons-477000
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-477000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | --download-only -p             | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT |                     |
	|         | binary-mirror-062000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-062000        | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | -p addons-477000               | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:40 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:52 PDT |                     |
	|         | addons-477000                  |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 10:01 PDT | 15 Jun 23 10:01 PDT |
	|         | -p addons-477000               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 09:33:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 09:33:17.135905    1397 out.go:296] Setting OutFile to fd 1 ...
	I0615 09:33:17.136030    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136034    1397 out.go:309] Setting ErrFile to fd 2...
	I0615 09:33:17.136037    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136120    1397 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 09:33:17.137161    1397 out.go:303] Setting JSON to false
	I0615 09:33:17.152121    1397 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":168,"bootTime":1686846629,"procs":371,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 09:33:17.152202    1397 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 09:33:17.156891    1397 out.go:177] * [addons-477000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 09:33:17.159887    1397 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 09:33:17.163775    1397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:33:17.159984    1397 notify.go:220] Checking for updates...
	I0615 09:33:17.171800    1397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 09:33:17.174813    1397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 09:33:17.177828    1397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 09:33:17.180704    1397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 09:33:17.183887    1397 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 09:33:17.187819    1397 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 09:33:17.194783    1397 start.go:297] selected driver: qemu2
	I0615 09:33:17.194788    1397 start.go:884] validating driver "qemu2" against <nil>
	I0615 09:33:17.194794    1397 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 09:33:17.196752    1397 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 09:33:17.199793    1397 out.go:177] * Automatically selected the socket_vmnet network
	I0615 09:33:17.201307    1397 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 09:33:17.201331    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:17.201335    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:17.201341    1397 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 09:33:17.201349    1397 start_flags.go:319] config:
	{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:17.201434    1397 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 09:33:17.209829    1397 out.go:177] * Starting control plane node addons-477000 in cluster addons-477000
	I0615 09:33:17.213726    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:17.213749    1397 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 09:33:17.213768    1397 cache.go:57] Caching tarball of preloaded images
	I0615 09:33:17.213824    1397 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 09:33:17.213830    1397 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 09:33:17.214051    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:17.214063    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json: {Name:mkc1c34b82952aae697463d2d78c6ea098445790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:17.214292    1397 start.go:365] acquiring machines lock for addons-477000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 09:33:17.214399    1397 start.go:369] acquired machines lock for "addons-477000" in 101.583µs
	I0615 09:33:17.214409    1397 start.go:93] Provisioning new machine with config: &{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:33:17.214436    1397 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 09:33:17.221743    1397 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0615 09:33:17.568031    1397 start.go:159] libmachine.API.Create for "addons-477000" (driver="qemu2")
	I0615 09:33:17.568071    1397 client.go:168] LocalClient.Create starting
	I0615 09:33:17.568226    1397 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 09:33:17.626803    1397 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 09:33:17.737973    1397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 09:33:17.902563    1397 main.go:141] libmachine: Creating SSH key...
	I0615 09:33:17.968617    1397 main.go:141] libmachine: Creating Disk image...
	I0615 09:33:17.968623    1397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 09:33:17.969817    1397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.004891    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.004923    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.004982    1397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2 +20000M
	I0615 09:33:18.012411    1397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 09:33:18.012436    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.012455    1397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.012466    1397 main.go:141] libmachine: Starting QEMU VM...
	I0615 09:33:18.012501    1397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:25:cc:0f:2e:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.081537    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.081558    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.081562    1397 main.go:141] libmachine: Attempt 0
	I0615 09:33:18.081577    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:20.083712    1397 main.go:141] libmachine: Attempt 1
	I0615 09:33:20.083961    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:22.086122    1397 main.go:141] libmachine: Attempt 2
	I0615 09:33:22.086166    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:24.088201    1397 main.go:141] libmachine: Attempt 3
	I0615 09:33:24.088224    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:26.090268    1397 main.go:141] libmachine: Attempt 4
	I0615 09:33:26.090325    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:28.092361    1397 main.go:141] libmachine: Attempt 5
	I0615 09:33:28.092379    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094488    1397 main.go:141] libmachine: Attempt 6
	I0615 09:33:30.094575    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094985    1397 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0615 09:33:30.095099    1397 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 09:33:30.095124    1397 main.go:141] libmachine: Found match: 1a:25:cc:f:2e:6f
	I0615 09:33:30.095168    1397 main.go:141] libmachine: IP: 192.168.105.2
	I0615 09:33:30.095195    1397 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
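
The searched address 1a:25:cc:f:2e:6f differs from the 1a:25:cc:0f:2e:6f passed to QEMU because macOS writes hardware addresses to /var/db/dhcpd_leases with the leading zero of each octet stripped, so the driver normalizes the MAC before matching. A sketch of that lookup, assuming each lease block lists ip_address before hw_address (function names are illustrative):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // trimMAC strips leading zeros from each octet, matching the format macOS
    // uses in /var/db/dhcpd_leases (1a:25:cc:0f:2e:6f -> 1a:25:cc:f:2e:6f).
    func trimMAC(mac string) string {
        parts := strings.Split(strings.ToLower(mac), ":")
        for i, p := range parts {
            parts[i] = strings.TrimLeft(p, "0")
            if parts[i] == "" {
                parts[i] = "0"
            }
        }
        return strings.Join(parts, ":")
    }

    // findLeaseIP returns the ip_address of the lease block whose hw_address
    // ends with the normalized MAC.
    func findLeaseIP(leasesPath, mac string) (string, bool, error) {
        f, err := os.Open(leasesPath)
        if err != nil {
            return "", false, err
        }
        defer f.Close()

        want := trimMAC(mac)
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "ip_address=") {
                ip = strings.TrimPrefix(line, "ip_address=")
            }
            if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, want) {
                return ip, true, nil
            }
        }
        return "", false, sc.Err()
    }

    func main() {
        ip, ok, err := findLeaseIP("/var/db/dhcpd_leases", "1a:25:cc:0f:2e:6f")
        fmt.Println(ip, ok, err)
    }
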
	I0615 09:33:32.115264    1397 machine.go:88] provisioning docker machine ...
	I0615 09:33:32.115338    1397 buildroot.go:166] provisioning hostname "addons-477000"
	I0615 09:33:32.116828    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.117588    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.117607    1397 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-477000 && echo "addons-477000" | sudo tee /etc/hostname
	I0615 09:33:32.199158    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-477000
	
	I0615 09:33:32.199283    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.199748    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.199763    1397 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-477000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-477000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-477000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0615 09:33:32.260846    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0615 09:33:32.260864    1397 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16718-868/.minikube CaCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16718-868/.minikube}
	I0615 09:33:32.260878    1397 buildroot.go:174] setting up certificates
	I0615 09:33:32.260906    1397 provision.go:83] configureAuth start
	I0615 09:33:32.260912    1397 provision.go:138] copyHostCerts
	I0615 09:33:32.261103    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem (1078 bytes)
	I0615 09:33:32.261436    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem (1123 bytes)
	I0615 09:33:32.262101    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem (1679 bytes)
	I0615 09:33:32.262442    1397 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem org=jenkins.addons-477000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-477000]
	I0615 09:33:32.306279    1397 provision.go:172] copyRemoteCerts
	I0615 09:33:32.306343    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0615 09:33:32.306360    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.335305    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0615 09:33:32.343471    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0615 09:33:32.351180    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0615 09:33:32.358490    1397 provision.go:86] duration metric: configureAuth took 97.576167ms
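
configureAuth generated a CA-signed server certificate whose SANs cover the VM IP, localhost, and the machine names listed above, then copied ca.pem, server.pem and server-key.pem into /etc/docker. A self-contained crypto/x509 sketch of that kind of issuance (it creates a throwaway CA so the example runs standalone; this is not minikube's actual code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server certificate with the given CA for the
    // listed IP and DNS SANs, roughly what dockerd TLS provisioning needs.
    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
        ips []net.IP, dns []string) ([]byte, *rsa.PrivateKey, error) {

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-477000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dns,
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }

    func main() {
        // Throwaway CA, only so the example is self-contained.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        der, _, err := issueServerCert(caCert, caKey,
            []net.IP{net.ParseIP("192.168.105.2"), net.ParseIP("127.0.0.1")},
            []string{"localhost", "minikube", "addons-477000"})
        fmt.Println(len(der), err)
    }
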
	I0615 09:33:32.358498    1397 buildroot.go:189] setting minikube options for container-runtime
	I0615 09:33:32.358950    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:33:32.358995    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.359216    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.359220    1397 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0615 09:33:32.410196    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0615 09:33:32.410204    1397 buildroot.go:70] root file system type: tmpfs
	I0615 09:33:32.410261    1397 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0615 09:33:32.410301    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.410550    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.410587    1397 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0615 09:33:32.468329    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0615 09:33:32.468380    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.468634    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.468643    1397 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0615 09:33:32.794674    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0615 09:33:32.794698    1397 machine.go:91] provisioned docker machine in 679.423792ms
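
The diff-or-replace one-liner above makes the unit install idempotent: the new file is only moved into place, and the service only reloaded, enabled and restarted, when its content actually differs from what is installed. The same pattern sketched locally in Go (hypothetical helper; writing under /lib/systemd/system needs root):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installUnit writes the unit only when it differs from the file on disk,
    // then reloads systemd and restarts the service, mirroring the SSH
    // one-liner in the log.
    func installUnit(path string, content []byte, service string) error {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, content) {
            return nil // unchanged: nothing to do
        }
        if err := os.WriteFile(path, content, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"daemon-reload"},
            {"enable", service},
            {"restart", service},
        } {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        err := installUnit("/lib/systemd/system/docker.service",
            []byte("[Unit]\nDescription=Docker Application Container Engine\n"),
            "docker")
        fmt.Println(err)
    }
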
	I0615 09:33:32.794704    1397 client.go:171] LocalClient.Create took 15.226996125s
	I0615 09:33:32.794723    1397 start.go:167] duration metric: libmachine.API.Create for "addons-477000" took 15.227064791s
	I0615 09:33:32.794726    1397 start.go:300] post-start starting for "addons-477000" (driver="qemu2")
	I0615 09:33:32.794731    1397 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0615 09:33:32.794812    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0615 09:33:32.794822    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.823879    1397 ssh_runner.go:195] Run: cat /etc/os-release
	I0615 09:33:32.825122    1397 info.go:137] Remote host: Buildroot 2021.02.12
	I0615 09:33:32.825128    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/addons for local assets ...
	I0615 09:33:32.825196    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/files for local assets ...
	I0615 09:33:32.825222    1397 start.go:303] post-start completed in 30.494125ms
	I0615 09:33:32.825555    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:32.825707    1397 start.go:128] duration metric: createHost completed in 15.611646375s
	I0615 09:33:32.825734    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.825947    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.825951    1397 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0615 09:33:32.876753    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686846813.320713543
	
	I0615 09:33:32.876760    1397 fix.go:206] guest clock: 1686846813.320713543
	I0615 09:33:32.876764    1397 fix.go:219] Guest: 2023-06-15 09:33:33.320713543 -0700 PDT Remote: 2023-06-15 09:33:32.825711 -0700 PDT m=+15.708594751 (delta=495.002543ms)
	I0615 09:33:32.876775    1397 fix.go:190] guest clock delta is within tolerance: 495.002543ms
	I0615 09:33:32.876778    1397 start.go:83] releasing machines lock for "addons-477000", held for 15.662753208s
	I0615 09:33:32.877060    1397 ssh_runner.go:195] Run: cat /version.json
	I0615 09:33:32.877067    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.877085    1397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0615 09:33:32.877121    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.951587    1397 ssh_runner.go:195] Run: systemctl --version
	I0615 09:33:32.953983    1397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0615 09:33:32.956008    1397 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0615 09:33:32.956040    1397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0615 09:33:32.961754    1397 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0615 09:33:32.961761    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:32.961877    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:32.967359    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0615 09:33:32.970783    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0615 09:33:32.973872    1397 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0615 09:33:32.973908    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0615 09:33:32.976794    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.979871    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0615 09:33:32.983273    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.986847    1397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0615 09:33:32.990009    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0615 09:33:32.992910    1397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0615 09:33:32.995885    1397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0615 09:33:32.999181    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.082046    1397 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0615 09:33:33.090305    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:33.090367    1397 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0615 09:33:33.095444    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.099628    1397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0615 09:33:33.106008    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.110583    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.115305    1397 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0615 09:33:33.157221    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.165685    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:33.171700    1397 ssh_runner.go:195] Run: which cri-dockerd
	I0615 09:33:33.173347    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0615 09:33:33.176671    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0615 09:33:33.184036    1397 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0615 09:33:33.256172    1397 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0615 09:33:33.326477    1397 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0615 09:33:33.326492    1397 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0615 09:33:33.331797    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.394602    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:34.551420    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1568305s)
	I0615 09:33:34.551480    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.614918    1397 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0615 09:33:34.680379    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.741670    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.802995    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0615 09:33:34.810702    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.876193    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0615 09:33:34.899281    1397 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0615 09:33:34.899375    1397 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0615 09:33:34.902005    1397 start.go:534] Will wait 60s for crictl version
	I0615 09:33:34.902039    1397 ssh_runner.go:195] Run: which crictl
	I0615 09:33:34.903665    1397 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0615 09:33:34.922827    1397 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
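
"Will wait 60s for socket path" and "Will wait 60s for crictl version" are plain readiness polls: stat the socket (or run crictl) until it succeeds or a deadline passes. A minimal sketch of the socket wait (illustrative, not the start.go implementation):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists (e.g. /var/run/cri-dockerd.sock)
    // or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }
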
	I0615 09:33:34.922910    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.936535    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.948006    1397 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0615 09:33:34.948101    1397 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0615 09:33:34.949468    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 09:33:34.953059    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:34.953103    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:34.958178    1397 docker.go:636] Got preloaded images: 
	I0615 09:33:34.958185    1397 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0615 09:33:34.958223    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:34.960919    1397 ssh_runner.go:195] Run: which lz4
	I0615 09:33:34.962156    1397 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0615 09:33:34.963566    1397 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0615 09:33:34.963580    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0615 09:33:36.258372    1397 docker.go:600] Took 1.296282 seconds to copy over tarball
	I0615 09:33:36.258440    1397 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0615 09:33:37.363016    1397 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.104582708s)
	I0615 09:33:37.363034    1397 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0615 09:33:37.379849    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:37.383479    1397 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0615 09:33:37.388752    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:37.449408    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:38.998025    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.548624125s)
	I0615 09:33:38.998130    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:39.004063    1397 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0615 09:33:39.004075    1397 cache_images.go:84] Images are preloaded, skipping loading
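
The preload decision above hinges on a docker images listing: when a required image such as registry.k8s.io/kube-apiserver:v1.27.3 was absent, the tarball was copied and extracted; after the restart all images are present and the load is skipped. A sketch of that check (function name is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // imagesPreloaded reports whether every required image already shows up
    // in `docker images`, the check that flips between "wasn't preloaded" and
    // "Images are preloaded, skipping loading".
    func imagesPreloaded(required []string) (bool, error) {
        out, err := exec.Command("docker", "images",
            "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            return false, err
        }
        have := make(map[string]bool)
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        for _, img := range required {
            if !have[img] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        ok, err := imagesPreloaded([]string{"registry.k8s.io/kube-apiserver:v1.27.3"})
        fmt.Println(ok, err)
    }
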
	I0615 09:33:39.004149    1397 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0615 09:33:39.011968    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:39.011977    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:39.012000    1397 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0615 09:33:39.012011    1397 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-477000 NodeName:addons-477000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0615 09:33:39.012111    1397 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-477000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
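
The kubeadm config above is rendered from the options struct logged at kubeadm.go:176. A toy text/template sketch of the rendering idea, covering only a few of the fields (the template text is illustrative, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // A handful of the fields visible in the log, enough to show the idea.
    type kubeadmParams struct {
        AdvertiseAddress string
        NodeName         string
        PodSubnet        string
        ServiceSubnet    string
        K8sVersion       string
    }

    const initTmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: 8443\n" +
        "nodeRegistration:\n" +
        "  name: \"{{.NodeName}}\"\n" +
        "---\n" +
        "apiVersion: kubeadm.k8s.io/v1beta3\n" +
        "kind: ClusterConfiguration\n" +
        "kubernetesVersion: {{.K8sVersion}}\n" +
        "networking:\n" +
        "  podSubnet: \"{{.PodSubnet}}\"\n" +
        "  serviceSubnet: {{.ServiceSubnet}}\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(initTmpl))
        _ = t.Execute(os.Stdout, kubeadmParams{
            AdvertiseAddress: "192.168.105.2",
            NodeName:         "addons-477000",
            PodSubnet:        "10.244.0.0/16",
            ServiceSubnet:    "10.96.0.0/12",
            K8sVersion:       "v1.27.3",
        })
    }
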
	
	I0615 09:33:39.012156    1397 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-477000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0615 09:33:39.012203    1397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0615 09:33:39.015681    1397 binaries.go:44] Found k8s binaries, skipping transfer
	I0615 09:33:39.015718    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0615 09:33:39.018849    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0615 09:33:39.023920    1397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0615 09:33:39.028733    1397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0615 09:33:39.033571    1397 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0615 09:33:39.034818    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 09:33:39.038909    1397 certs.go:56] Setting up /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000 for IP: 192.168.105.2
	I0615 09:33:39.038918    1397 certs.go:190] acquiring lock for shared ca certs: {Name:mk9ee4d7ca68f2cc32c8609d33f6ce33c43a91d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.039073    1397 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key
	I0615 09:33:39.109209    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt ...
	I0615 09:33:39.109214    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt: {Name:mka7538e8370ad0560f47e28d206b077e2dbbef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109425    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key ...
	I0615 09:33:39.109428    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key: {Name:mkca6c7de675216938ac1a6663738af412e2d280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109532    1397 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key
	I0615 09:33:39.219574    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt ...
	I0615 09:33:39.219577    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt: {Name:mk21a595039c96735254391e5270364a73e52306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219709    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key ...
	I0615 09:33:39.219712    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key: {Name:mk96cab9f1987887c2b313cd365bdba518ec818b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219826    1397 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key
	I0615 09:33:39.219831    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt with IP's: []
	I0615 09:33:39.435828    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt ...
	I0615 09:33:39.435835    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: {Name:mk0f2105a4c5fdba007e9c77c7945365dc3f96af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436029    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key ...
	I0615 09:33:39.436031    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key: {Name:mk65e491f4b4c1ee8d05045efb9265b2c697a551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436124    1397 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969
	I0615 09:33:39.436133    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0615 09:33:39.510125    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 ...
	I0615 09:33:39.510129    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969: {Name:mk7c90d062166950585957cb3f0ce136594c9cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510277    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 ...
	I0615 09:33:39.510280    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969: {Name:mk4266445b8f2d5bc078d169ee24b8765955e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510384    1397 certs.go:337] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt
	I0615 09:33:39.510598    1397 certs.go:341] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key
	I0615 09:33:39.510713    1397 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key
	I0615 09:33:39.510723    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt with IP's: []
	I0615 09:33:39.610633    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt ...
	I0615 09:33:39.610637    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt: {Name:mk04e06c13fe3eccffb62f328096a02f5668baa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.610779    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key ...
	I0615 09:33:39.610783    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key: {Name:mk21e3c8e84fcac9a2d9da5e0fa06b26ad1ee7cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.611042    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem (1679 bytes)
	I0615 09:33:39.611072    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem (1078 bytes)
	I0615 09:33:39.611094    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem (1123 bytes)
	I0615 09:33:39.611429    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem (1679 bytes)
	I0615 09:33:39.611960    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0615 09:33:39.619546    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0615 09:33:39.626711    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0615 09:33:39.633501    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0615 09:33:39.640010    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0615 09:33:39.647112    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0615 09:33:39.654063    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0615 09:33:39.660582    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0615 09:33:39.667533    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0615 09:33:39.674545    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0615 09:33:39.680339    1397 ssh_runner.go:195] Run: openssl version
	I0615 09:33:39.682379    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0615 09:33:39.685404    1397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686832    1397 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 15 16:33 /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686855    1397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.688703    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0615 09:33:39.691957    1397 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0615 09:33:39.693381    1397 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0615 09:33:39.693418    1397 kubeadm.go:404] StartCluster: {Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:39.693485    1397 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0615 09:33:39.699291    1397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0615 09:33:39.702928    1397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0615 09:33:39.706168    1397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0615 09:33:39.708902    1397 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0615 09:33:39.708925    1397 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0615 09:33:39.731022    1397 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0615 09:33:39.731051    1397 kubeadm.go:322] [preflight] Running pre-flight checks
	I0615 09:33:39.787198    1397 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0615 09:33:39.787252    1397 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0615 09:33:39.787291    1397 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0615 09:33:39.845524    1397 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0615 09:33:39.853720    1397 out.go:204]   - Generating certificates and keys ...
	I0615 09:33:39.853771    1397 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0615 09:33:39.853800    1397 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0615 09:33:40.047052    1397 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0615 09:33:40.281668    1397 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0615 09:33:40.373604    1397 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0615 09:33:40.496002    1397 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0615 09:33:40.752895    1397 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0615 09:33:40.752975    1397 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.889354    1397 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0615 09:33:40.889424    1397 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.967392    1397 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0615 09:33:41.132487    1397 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0615 09:33:41.175551    1397 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0615 09:33:41.175583    1397 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0615 09:33:41.275708    1397 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0615 09:33:41.313261    1397 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0615 09:33:41.394612    1397 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0615 09:33:41.488793    1397 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0615 09:33:41.495623    1397 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0615 09:33:41.495672    1397 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0615 09:33:41.495691    1397 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0615 09:33:41.565044    1397 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0615 09:33:41.570236    1397 out.go:204]   - Booting up control plane ...
	I0615 09:33:41.570302    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0615 09:33:41.570344    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0615 09:33:41.570389    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0615 09:33:41.570430    1397 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0615 09:33:41.570514    1397 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0615 09:33:45.571765    1397 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003295 seconds
	I0615 09:33:45.571857    1397 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0615 09:33:45.577408    1397 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0615 09:33:46.094757    1397 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0615 09:33:46.095006    1397 kubeadm.go:322] [mark-control-plane] Marking the node addons-477000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0615 09:33:46.613385    1397 kubeadm.go:322] [bootstrap-token] Using token: f4kg8y.q60xaa2tn5uwspbb
	I0615 09:33:46.619341    1397 out.go:204]   - Configuring RBAC rules ...
	I0615 09:33:46.619403    1397 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0615 09:33:46.620813    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0615 09:33:46.624663    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0615 09:33:46.625913    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0615 09:33:46.627185    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0615 09:33:46.628306    1397 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0615 09:33:46.632523    1397 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0615 09:33:46.806353    1397 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0615 09:33:47.022461    1397 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0615 09:33:47.022733    1397 kubeadm.go:322] 
	I0615 09:33:47.022774    1397 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0615 09:33:47.022780    1397 kubeadm.go:322] 
	I0615 09:33:47.022834    1397 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0615 09:33:47.022839    1397 kubeadm.go:322] 
	I0615 09:33:47.022851    1397 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0615 09:33:47.022879    1397 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0615 09:33:47.022912    1397 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0615 09:33:47.022916    1397 kubeadm.go:322] 
	I0615 09:33:47.022952    1397 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0615 09:33:47.022958    1397 kubeadm.go:322] 
	I0615 09:33:47.022992    1397 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0615 09:33:47.022995    1397 kubeadm.go:322] 
	I0615 09:33:47.023016    1397 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0615 09:33:47.023050    1397 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0615 09:33:47.023081    1397 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0615 09:33:47.023083    1397 kubeadm.go:322] 
	I0615 09:33:47.023121    1397 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0615 09:33:47.023158    1397 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0615 09:33:47.023161    1397 kubeadm.go:322] 
	I0615 09:33:47.023197    1397 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023261    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 \
	I0615 09:33:47.023273    1397 kubeadm.go:322] 	--control-plane 
	I0615 09:33:47.023278    1397 kubeadm.go:322] 
	I0615 09:33:47.023320    1397 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0615 09:33:47.023326    1397 kubeadm.go:322] 
	I0615 09:33:47.023380    1397 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023443    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 
	I0615 09:33:47.023525    1397 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
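
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, so it can be recomputed from ca.crt:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash recomputes kubeadm's --discovery-token-ca-cert-hash value:
    // sha256 over the DER-encoded SubjectPublicKeyInfo of the CA certificate.
    func caCertHash(caCertPath string) (string, error) {
        data, err := os.ReadFile(caCertPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", caCertPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        fmt.Println(h, err)
    }
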
	I0615 09:33:47.023586    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:47.023594    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:47.031274    1397 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0615 09:33:47.035321    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0615 09:33:47.038799    1397 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
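
The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist configures the built-in bridge plugin for the 10.244.0.0/16 pod CIDR chosen earlier. An illustrative conflist of that general shape (the field values here are an assumption, not the exact bytes from this run):

    package main

    import (
        "fmt"
        "os"
    )

    // A bridge CNI config of the general shape minikube installs; the real
    // 1-k8s.conflist from this run may differ in detail.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        fmt.Println(os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644))
    }
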
	I0615 09:33:47.043709    1397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0615 09:33:47.043747    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.043800    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627 minikube.k8s.io/name=addons-477000 minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.095849    1397 ops.go:34] apiserver oom_adj: -16
	I0615 09:33:47.095898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.645147    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.145079    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.645093    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.145044    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.645148    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.145134    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.645328    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.145310    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.645116    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.144609    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.645278    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.145243    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.645239    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.145000    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.644744    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.145233    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.644949    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.145008    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.644938    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.143430    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.645224    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.144898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.644909    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.144773    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.644338    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.144834    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.206574    1397 kubeadm.go:1081] duration metric: took 13.163176875s to wait for elevateKubeSystemPrivileges.
	I0615 09:34:00.206587    1397 kubeadm.go:406] StartCluster complete in 20.513668625s
	I0615 09:34:00.206614    1397 settings.go:142] acquiring lock: {Name:mk45a698fcd8dd8ae6984c9cf4ad4d183fdb5424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.206769    1397 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:34:00.206961    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/kubeconfig: {Name:mkbe9cac04fb467055323f2e3d5db2c6ddc287ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.207185    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0615 09:34:00.207249    1397 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0615 09:34:00.207317    1397 addons.go:66] Setting ingress=true in profile "addons-477000"
	I0615 09:34:00.207322    1397 addons.go:66] Setting ingress-dns=true in profile "addons-477000"
	I0615 09:34:00.207325    1397 addons.go:228] Setting addon ingress=true in "addons-477000"
	I0615 09:34:00.207327    1397 addons.go:228] Setting addon ingress-dns=true in "addons-477000"
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207358    1397 addons.go:66] Setting cloud-spanner=true in profile "addons-477000"
	I0615 09:34:00.207362    1397 addons.go:228] Setting addon cloud-spanner=true in "addons-477000"
	I0615 09:34:00.207371    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207402    1397 addons.go:66] Setting metrics-server=true in profile "addons-477000"
	I0615 09:34:00.207421    1397 addons.go:66] Setting registry=true in profile "addons-477000"
	I0615 09:34:00.207457    1397 addons.go:228] Setting addon registry=true in "addons-477000"
	I0615 09:34:00.207434    1397 addons.go:66] Setting inspektor-gadget=true in profile "addons-477000"
	I0615 09:34:00.207483    1397 addons.go:228] Setting addon inspektor-gadget=true in "addons-477000"
	I0615 09:34:00.207494    1397 addons.go:228] Setting addon metrics-server=true in "addons-477000"
	I0615 09:34:00.207502    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207433    1397 addons.go:66] Setting default-storageclass=true in profile "addons-477000"
	I0615 09:34:00.207531    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:34:00.207537    1397 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-477000"
	I0615 09:34:00.207575    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207475    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207436    1397 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-477000"
	I0615 09:34:00.207676    1397 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.207687    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207317    1397 addons.go:66] Setting volumesnapshots=true in profile "addons-477000"
	I0615 09:34:00.207735    1397 addons.go:228] Setting addon volumesnapshots=true in "addons-477000"
	I0615 09:34:00.207746    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207438    1397 addons.go:66] Setting gcp-auth=true in profile "addons-477000"
	I0615 09:34:00.207776    1397 mustload.go:65] Loading cluster: addons-477000
	I0615 09:34:00.207433    1397 addons.go:66] Setting storage-provisioner=true in profile "addons-477000"
	I0615 09:34:00.208143    1397 addons.go:228] Setting addon storage-provisioner=true in "addons-477000"
	I0615 09:34:00.208157    1397 host.go:66] Checking if "addons-477000" exists ...
	W0615 09:34:00.208299    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208325    1397 addons.go:274] "addons-477000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0615 09:34:00.208331    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208338    1397 addons.go:274] "addons-477000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0615 09:34:00.208333    1397 addons.go:464] Verifying addon registry=true in "addons-477000"
	I0615 09:34:00.211629    1397 out.go:177] * Verifying registry addon...
	W0615 09:34:00.208373    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.207993    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	W0615 09:34:00.208397    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208133    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208437    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208601    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208662    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.215730    1397 addons.go:228] Setting addon default-storageclass=true in "addons-477000"
	W0615 09:34:00.218550    1397 addons.go:274] "addons-477000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218559    1397 addons.go:274] "addons-477000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218580    1397 addons.go:274] "addons-477000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218593    1397 addons.go:274] "addons-477000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0615 09:34:00.218944    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0615 09:34:00.219273    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.221576    1397 addons.go:464] Verifying addon metrics-server=true in "addons-477000"
	I0615 09:34:00.221583    1397 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0615 09:34:00.224623    1397 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0615 09:34:00.224631    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0615 09:34:00.224638    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.221629    1397 addons.go:464] Verifying addon ingress=true in "addons-477000"
	I0615 09:34:00.221636    1397 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.221723    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.228543    1397 out.go:177] * Verifying ingress addon...
	I0615 09:34:00.238557    1397 out.go:177] * Verifying csi-hostpath-driver addon...
	I0615 09:34:00.229248    1397 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.235988    1397 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0615 09:34:00.241352    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0615 09:34:00.242560    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0615 09:34:00.242594    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.242973    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0615 09:34:00.245385    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0615 09:34:00.251897    1397 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0615 09:34:00.275682    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0615 09:34:00.278596    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0615 09:34:00.278604    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0615 09:34:00.310787    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0615 09:34:00.310799    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0615 09:34:00.340750    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0615 09:34:00.340763    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0615 09:34:00.370147    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.374756    1397 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0615 09:34:00.374766    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0615 09:34:00.391483    1397 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.391493    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0615 09:34:00.396735    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.725129    1397 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-477000" context rescaled to 1 replicas
	I0615 09:34:00.725155    1397 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:34:00.731827    1397 out.go:177] * Verifying Kubernetes components...
	I0615 09:34:00.735986    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 09:34:01.130555    1397 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0615 09:34:01.273320    1397 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0615 09:34:01.273347    1397 retry.go:31] will retry after 358.412085ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0615 09:34:01.273795    1397 node_ready.go:35] waiting up to 6m0s for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275272    1397 node_ready.go:49] node "addons-477000" has status "Ready":"True"
	I0615 09:34:01.275281    1397 node_ready.go:38] duration metric: took 1.477792ms waiting for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275284    1397 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 09:34:01.279498    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:01.633151    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:02.299497    1397 pod_ready.go:92] pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.299518    1397 pod_ready.go:81] duration metric: took 1.020034208s waiting for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.299526    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303736    1397 pod_ready.go:92] pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.303743    1397 pod_ready.go:81] duration metric: took 4.212458ms waiting for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303749    1397 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307230    1397 pod_ready.go:92] pod "etcd-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.307237    1397 pod_ready.go:81] duration metric: took 3.484042ms waiting for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307243    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311004    1397 pod_ready.go:92] pod "kube-apiserver-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.311013    1397 pod_ready.go:81] duration metric: took 3.766916ms waiting for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311019    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481809    1397 pod_ready.go:92] pod "kube-controller-manager-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.481828    1397 pod_ready.go:81] duration metric: took 170.807958ms waiting for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481838    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883885    1397 pod_ready.go:92] pod "kube-proxy-8rgcs" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.883919    1397 pod_ready.go:81] duration metric: took 402.082375ms waiting for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883933    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277736    1397 pod_ready.go:92] pod "kube-scheduler-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:03.277748    1397 pod_ready.go:81] duration metric: took 393.817875ms waiting for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277754    1397 pod_ready.go:38] duration metric: took 2.002511417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 09:34:03.277768    1397 api_server.go:52] waiting for apiserver process to appear ...
	I0615 09:34:03.277845    1397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 09:34:03.986804    1397 api_server.go:72] duration metric: took 3.261712416s to wait for apiserver process to appear ...
	I0615 09:34:03.986816    1397 api_server.go:88] waiting for apiserver healthz status ...
	I0615 09:34:03.986824    1397 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0615 09:34:03.986882    1397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.353767917s)
	I0615 09:34:03.990093    1397 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I0615 09:34:03.990734    1397 api_server.go:141] control plane version: v1.27.3
	I0615 09:34:03.990742    1397 api_server.go:131] duration metric: took 3.923291ms to wait for apiserver health ...
	I0615 09:34:03.990745    1397 system_pods.go:43] waiting for kube-system pods to appear ...
	I0615 09:34:03.993833    1397 system_pods.go:59] 9 kube-system pods found
	I0615 09:34:03.993840    1397 system_pods.go:61] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.993843    1397 system_pods.go:61] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.993845    1397 system_pods.go:61] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.993848    1397 system_pods.go:61] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.993851    1397 system_pods.go:61] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.993853    1397 system_pods.go:61] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.993855    1397 system_pods.go:61] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.993859    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993864    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993866    1397 system_pods.go:74] duration metric: took 3.119166ms to wait for pod list to return data ...
	I0615 09:34:03.993869    1397 default_sa.go:34] waiting for default service account to be created ...
	I0615 09:34:03.995049    1397 default_sa.go:45] found service account: "default"
	I0615 09:34:03.995055    1397 default_sa.go:55] duration metric: took 1.183708ms for default service account to be created ...
	I0615 09:34:03.995057    1397 system_pods.go:116] waiting for k8s-apps to be running ...
	I0615 09:34:03.998400    1397 system_pods.go:86] 9 kube-system pods found
	I0615 09:34:03.998409    1397 system_pods.go:89] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.998411    1397 system_pods.go:89] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.998414    1397 system_pods.go:89] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.998416    1397 system_pods.go:89] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.998419    1397 system_pods.go:89] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.998421    1397 system_pods.go:89] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.998424    1397 system_pods.go:89] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.998429    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998433    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998436    1397 system_pods.go:126] duration metric: took 3.376208ms to wait for k8s-apps to be running ...
	I0615 09:34:03.998439    1397 system_svc.go:44] waiting for kubelet service to be running ....
	I0615 09:34:03.998489    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 09:34:04.003913    1397 system_svc.go:56] duration metric: took 5.471458ms WaitForService to wait for kubelet.
	I0615 09:34:04.003921    1397 kubeadm.go:581] duration metric: took 3.278833625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0615 09:34:04.003932    1397 node_conditions.go:102] verifying NodePressure condition ...
	I0615 09:34:04.077208    1397 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0615 09:34:04.077239    1397 node_conditions.go:123] node cpu capacity is 2
	I0615 09:34:04.077244    1397 node_conditions.go:105] duration metric: took 73.311333ms to run NodePressure ...
	I0615 09:34:04.077249    1397 start.go:228] waiting for startup goroutines ...
	I0615 09:34:06.831960    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0615 09:34:06.832053    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.882622    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0615 09:34:06.891297    1397 addons.go:228] Setting addon gcp-auth=true in "addons-477000"
	I0615 09:34:06.891339    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:06.892599    1397 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0615 09:34:06.892612    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.928262    1397 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0615 09:34:06.932997    1397 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0615 09:34:06.937187    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0615 09:34:06.937194    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0615 09:34:06.943495    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0615 09:34:06.943502    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0615 09:34:06.949337    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:06.949343    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0615 09:34:06.954968    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:07.475178    1397 addons.go:464] Verifying addon gcp-auth=true in "addons-477000"
	I0615 09:34:07.478304    1397 out.go:177] * Verifying gcp-auth addon...
	I0615 09:34:07.485666    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0615 09:34:07.491991    1397 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0615 09:34:07.492002    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:07.996710    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:08.496921    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.002133    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.494606    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.995080    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.495704    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.995530    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:11.495877    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.001470    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.497446    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.001473    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.502268    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.997362    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.503184    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.997798    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:15.495991    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.000278    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.501895    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.001719    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.495416    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.995757    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:18.496835    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:19.002478    1397 kapi.go:107] duration metric: took 11.51706925s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0615 09:34:19.008171    1397 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-477000 cluster.
	I0615 09:34:19.011889    1397 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0615 09:34:19.016120    1397 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0615 09:40:00.215049    1397 kapi.go:107] duration metric: took 6m0.00478975s to wait for kubernetes.io/minikube-addons=registry ...
	W0615 09:40:00.215455    1397 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0615 09:40:00.235888    1397 kapi.go:107] duration metric: took 6m0.001641708s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0615 09:40:00.235937    1397 kapi.go:107] duration metric: took 6m0.008679083s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0615 09:40:00.236028    1397 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0615 09:40:00.236088    1397 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0615 09:40:00.243957    1397 out.go:177] * Enabled addons: inspektor-gadget, metrics-server, cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0615 09:40:00.250970    1397 addons.go:499] enable addons completed in 6m0.052449292s: enabled=[inspektor-gadget metrics-server cloud-spanner ingress-dns storage-provisioner default-storageclass volumesnapshots gcp-auth]
	I0615 09:40:00.251042    1397 start.go:233] waiting for cluster config update ...
	I0615 09:40:00.251069    1397 start.go:242] writing updated cluster config ...
	I0615 09:40:00.255738    1397 ssh_runner.go:195] Run: rm -f paused
	I0615 09:40:00.403218    1397 start.go:582] kubectl: 1.25.9, cluster: 1.27.3 (minor skew: 2)
	I0615 09:40:00.405982    1397 out.go:177] 
	W0615 09:40:00.410033    1397 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.3.
	I0615 09:40:00.413868    1397 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0615 09:40:00.421952    1397 out.go:177] * Done! kubectl is now configured to use "addons-477000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 17:13:17 UTC. --
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.817442834Z" level=info msg="shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819525140Z" level=warning msg="cleaning up after shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819574276Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.579995441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580279599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580317285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580357122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c316c00ee755585c1753e0f1d6364e1731871da5d072484c67c43cac67cd349/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 15 16:34:15 addons-477000 dockerd[1091]: time="2023-06-15T16:34:15.926803390Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:18Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269480431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269881621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269894061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269898788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:01:55 addons-477000 dockerd[1097]: time="2023-06-15T17:01:55.073287389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 17:01:55 addons-477000 dockerd[1097]: time="2023-06-15T17:01:55.073319639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:01:55 addons-477000 dockerd[1097]: time="2023-06-15T17:01:55.073330764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:01:55 addons-477000 dockerd[1097]: time="2023-06-15T17:01:55.073337722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:01:55 addons-477000 cri-dockerd[991]: time="2023-06-15T17:01:55Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef7c71126455ea790a0991802fb95ede312f12f8ea91a16a91dba330edd51c13/resolv.conf as [nameserver 10.96.0.10 search headlamp.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 15 17:01:55 addons-477000 dockerd[1091]: time="2023-06-15T17:01:55.472931913Z" level=warning msg="reference for unknown type: " digest="sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be" remote="ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	Jun 15 17:02:00 addons-477000 cri-dockerd[991]: time="2023-06-15T17:02:00Z" level=info msg="Stop pulling image ghcr.io/headlamp-k8s/headlamp:v0.17.1@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be: Status: Downloaded newer image for ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be"
	Jun 15 17:02:00 addons-477000 dockerd[1097]: time="2023-06-15T17:02:00.127950539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 17:02:00 addons-477000 dockerd[1097]: time="2023-06-15T17:02:00.127982330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:02:00 addons-477000 dockerd[1097]: time="2023-06-15T17:02:00.128158870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:02:00 addons-477000 dockerd[1097]: time="2023-06-15T17:02:00.128170953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	9ee1f57e827a2       ghcr.io/headlamp-k8s/headlamp@sha256:9c33d03e6032adc2ae0920bbda10f82aa223d796f99cfb9509608cd389a157be                     11 minutes ago      Running             headlamp                     0                   ef7c71126455e
	6a4bcd8ac64ff       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              38 minutes ago      Running             gcp-auth                     0                   0c316c00ee755
	8527d6f42bef1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   39 minutes ago      Running             volume-snapshot-controller   0                   f6bd41ad4abf6
	06a9dab9c48b6       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   39 minutes ago      Running             volume-snapshot-controller   0                   629404aaee996
	256eaaad3894a       97e04611ad434                                                                                                             39 minutes ago      Running             coredns                      0                   f6fc2a0d05c4a
	29b72a92c6578       fb73e92641fd5                                                                                                             39 minutes ago      Running             kube-proxy                   0                   405ca9198a355
	733213e41e3b9       bcb9e554eaab6                                                                                                             39 minutes ago      Running             kube-scheduler               0                   25817e506c78b
	b11fb0f325644       39dfb036b0986                                                                                                             39 minutes ago      Running             kube-apiserver               0                   0dde73a500899
	66de98cb24ea0       ab3683b584ae5                                                                                                             39 minutes ago      Running             kube-controller-manager      0                   69ef168f52131
	41a6909f99a59       24bc64e911039                                                                                                             39 minutes ago      Running             etcd                         0                   9b969e901cc05
	
	* 
	* ==> coredns [256eaaad3894] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55502 - 31535 "HINFO IN 8156761713541019547.3807690688336836625. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.006087175s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-477000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-477000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627
	                    minikube.k8s.io/name=addons-477000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Jun 2023 16:33:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-477000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Jun 2023 17:13:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Jun 2023 17:12:38 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Jun 2023 17:12:38 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Jun 2023 17:12:38 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Jun 2023 17:12:38 +0000   Thu, 15 Jun 2023 16:33:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-477000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 5009a87f17804889a5a4616073b937e0
	  System UUID:                5009a87f17804889a5a4616073b937e0
	  Boot ID:                    9630f686-3c90-436f-98e6-d8c6686f510a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-2pgxv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  headlamp                    headlamp-6b5756787-s7gx7                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5d78c9869d-mds5s                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     39m
	  kube-system                 etcd-addons-477000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         39m
	  kube-system                 kube-apiserver-addons-477000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-controller-manager-addons-477000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-proxy-8rgcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 kube-scheduler-addons-477000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 snapshot-controller-75bbb956b9-p6hk4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	  kube-system                 snapshot-controller-75bbb956b9-prqv8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         39m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 39m                kube-proxy       
	  Normal  Starting                 39m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39m (x8 over 39m)  kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39m (x8 over 39m)  kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39m (x7 over 39m)  kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 39m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  39m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  39m                kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39m                kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39m                kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                39m                kubelet          Node addons-477000 status is now: NodeReady
	  Normal  RegisteredNode           39m                node-controller  Node addons-477000 event: Registered Node addons-477000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.687452] EINJ: EINJ table not found.
	[  +0.627011] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043411] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000812] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.868427] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.067044] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.422105] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.174556] systemd-fstab-generator[729]: Ignoring "noauto" for root device
	[  +0.069729] systemd-fstab-generator[740]: Ignoring "noauto" for root device
	[  +0.066761] systemd-fstab-generator[753]: Ignoring "noauto" for root device
	[  +1.220689] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[  +0.067164] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.058616] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.062347] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +0.069889] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +2.576383] systemd-fstab-generator[1084]: Ignoring "noauto" for root device
	[  +1.530737] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.580821] systemd-fstab-generator[1404]: Ignoring "noauto" for root device
	[  +5.139726] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[Jun15 16:34] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.392909] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.125194] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.280200] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.114632] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [41a6909f99a5] <==
	* {"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-06-15T16:43:43.957Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":747}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":747,"took":"2.441695ms","hash":524925281}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":524925281,"revision":747,"compact-revision":-1}
	{"level":"info","ts":"2023-06-15T16:48:43.971Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":897}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":897,"took":"1.284283ms","hash":2514030906}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2514030906,"revision":897,"compact-revision":747}
	{"level":"info","ts":"2023-06-15T16:53:43.979Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1048}
	{"level":"info","ts":"2023-06-15T16:53:43.981Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1048,"took":"857.726µs","hash":834622362}
	{"level":"info","ts":"2023-06-15T16:53:43.981Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":834622362,"revision":1048,"compact-revision":897}
	{"level":"info","ts":"2023-06-15T16:58:43.988Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1198}
	{"level":"info","ts":"2023-06-15T16:58:43.991Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1198,"took":"1.183357ms","hash":1870194874}
	{"level":"info","ts":"2023-06-15T16:58:43.991Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1870194874,"revision":1198,"compact-revision":1048}
	{"level":"info","ts":"2023-06-15T17:03:44.005Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1348}
	{"level":"info","ts":"2023-06-15T17:03:44.007Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1348,"took":"1.080862ms","hash":686046031}
	{"level":"info","ts":"2023-06-15T17:03:44.008Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":686046031,"revision":1348,"compact-revision":1198}
	{"level":"info","ts":"2023-06-15T17:08:44.012Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1541}
	{"level":"info","ts":"2023-06-15T17:08:44.013Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1541,"took":"775.306µs","hash":1174214079}
	{"level":"info","ts":"2023-06-15T17:08:44.013Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1174214079,"revision":1541,"compact-revision":1348}
	
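Note: the compact-tree-index entries above recur at five-minute intervals (16:43, 16:48, 16:53, ...), which matches the kube-apiserver's default --etcd-compaction-interval of 5m0s, so they read as routine housekeeping rather than a failure signal. A minimal sketch for confirming the cadence from the host, assuming the container ID in the section header is still current:

    # Filter the etcd container log inside the guest for compaction entries.
    minikube ssh -p addons-477000 -- docker logs 41a6909f99a5 2>&1 | grep 'compact tree index'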
	* 
	* ==> gcp-auth [6a4bcd8ac64f] <==
	* 2023/06/15 16:34:18 GCP Auth Webhook started!
	2023/06/15 17:01:54 Ready to marshal response ...
	2023/06/15 17:01:54 Ready to write response ...
	2023/06/15 17:01:54 Ready to marshal response ...
	2023/06/15 17:01:54 Ready to write response ...
	2023/06/15 17:01:54 Ready to marshal response ...
	2023/06/15 17:01:54 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  17:13:17 up 39 min,  0 users,  load average: 0.68, 0.63, 0.55
	Linux addons-477000 5.10.57 #1 SMP PREEMPT Wed Jun 14 05:08:37 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b11fb0f32564] <==
	* I0615 16:48:44.765734       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:48:44.766097       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:53:44.751144       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:53:44.751426       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:53:44.766204       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:53:44.766395       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.752713       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.753513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.754419       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.754518       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:58:44.763858       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:58:44.764268       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 17:01:54.687994       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs=map[IPv4:10.102.63.39]
	I0615 17:03:44.753468       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 17:03:44.753599       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 17:03:44.754120       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 17:03:44.754191       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 17:03:44.764221       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 17:03:44.764679       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 17:08:44.754513       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 17:08:44.754850       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 17:08:44.763014       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 17:08:44.763355       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 17:08:44.775422       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 17:08:44.775478       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [66de98cb24ea] <==
	* I0615 17:10:14.922136       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:10:29.922891       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:10:29.923119       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:10:44.925499       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:10:44.926163       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:10:59.928304       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:10:59.928796       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:11:14.929441       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:11:14.929701       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:11:29.930167       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:11:29.930502       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:11:44.930708       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:11:44.930962       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:11:59.931860       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:11:59.932206       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:12:14.933591       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:12:14.933680       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:12:29.935843       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:12:29.936501       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:12:44.937825       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:12:44.938130       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:12:59.938607       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:12:59.938645       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	E0615 17:13:14.939291       1 pv_controller.go:1571] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found" PVC="default/hpvc"
	I0615 17:13:14.939869       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ProvisioningFailed" message="storageclass.storage.k8s.io \"csi-hostpath-sc\" not found"
	
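Note: the controller-manager is retrying the same provisioning failure every 15 seconds because PVC default/hpvc names StorageClass csi-hostpath-sc, which the csi-hostpath-driver addon evidently never registered; that is consistent with the TestAddons/parallel/CSI timeout. A hedged way to confirm from the host:

    # Check whether the addon's StorageClass exists and why the claim is stuck.
    kubectl --context addons-477000 get storageclass
    kubectl --context addons-477000 describe pvc hpvc -n default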
	* 
	* ==> kube-proxy [29b72a92c657] <==
	* I0615 16:34:01.157223       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0615 16:34:01.157274       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0615 16:34:01.157290       1 server_others.go:554] "Using iptables proxy"
	I0615 16:34:01.207136       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0615 16:34:01.207158       1 server_others.go:192] "Using iptables Proxier"
	I0615 16:34:01.207188       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0615 16:34:01.207493       1 server.go:658] "Version info" version="v1.27.3"
	I0615 16:34:01.207499       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0615 16:34:01.208029       1 config.go:188] "Starting service config controller"
	I0615 16:34:01.208049       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0615 16:34:01.208060       1 config.go:97] "Starting endpoint slice config controller"
	I0615 16:34:01.208062       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0615 16:34:01.209533       1 config.go:315] "Starting node config controller"
	I0615 16:34:01.209537       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0615 16:34:01.308743       1 shared_informer.go:318] Caches are synced for service config
	I0615 16:34:01.308782       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0615 16:34:01.309993       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [733213e41e3b] <==
	* W0615 16:33:44.753904       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:44.754011       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:44.754034       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:44.754072       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:44.754021       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:44.754081       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:44.754001       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0615 16:33:44.754100       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0615 16:33:44.754136       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0615 16:33:44.754145       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0615 16:33:45.605616       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0615 16:33:45.605673       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0615 16:33:45.647245       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:45.647292       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:45.699650       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0615 16:33:45.699699       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0615 16:33:45.702358       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0615 16:33:45.702403       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0615 16:33:45.718371       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:45.718408       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:45.723261       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:45.723281       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:45.755043       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0615 16:33:45.755066       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0615 16:33:46.350596       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
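Note: the forbidden/reflector errors above all predate 16:33:46 and stop once the final "Caches are synced" line appears; they look like startup noise emitted before the apiserver publishes the scheduler's RBAC bindings. A hedged spot-check, assuming the standard kubeadm binding name:

    # The reflectors recover once this binding exists.
    kubectl --context addons-477000 get clusterrolebinding system:kube-scheduler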
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 17:13:17 UTC. --
	Jun 15 17:07:47 addons-477000 kubelet[2256]: E0615 17:07:47.330099    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:07:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:07:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:07:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:08:47 addons-477000 kubelet[2256]: W0615 17:08:47.320097    2256 machine.go:65] Cannot read vendor id correctly, set empty.
	Jun 15 17:08:47 addons-477000 kubelet[2256]: E0615 17:08:47.329780    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:08:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:08:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:08:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:09:47 addons-477000 kubelet[2256]: E0615 17:09:47.330557    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:09:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:09:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:09:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:10:47 addons-477000 kubelet[2256]: E0615 17:10:47.330334    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:10:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:10:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:10:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:11:47 addons-477000 kubelet[2256]: E0615 17:11:47.335557    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:11:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:11:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:11:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:12:47 addons-477000 kubelet[2256]: E0615 17:12:47.330829    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:12:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:12:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:12:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	
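Note: the once-a-minute canary failure above means the guest kernel cannot provide an ip6tables nat table, most likely because the ip6table_nat module is not built into this Buildroot image; on an IPv4-only cluster this is typically harmless noise. A hedged check from the host:

    # See whether the module can be loaded at all before blaming kubelet.
    minikube ssh -p addons-477000 -- 'lsmod | grep ip6table; sudo modprobe ip6table_nat; sudo ip6tables -t nat -L'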

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-477000 -n addons-477000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-477000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CSI FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CSI (671.15s)

TestAddons/parallel/CloudSpanner (832.32s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:831: failed waiting for cloud-spanner-emulator deployment to stabilize: timed out waiting for the condition
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
addons_test.go:833: ***** TestAddons/parallel/CloudSpanner: pod "app=cloud-spanner-emulator" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:833: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-477000 -n addons-477000
addons_test.go:833: TestAddons/parallel/CloudSpanner: showing logs for failed pods as of 2023-06-15 09:52:00.653439 -0700 PDT m=+1174.936876543
addons_test.go:834: failed waiting for app=cloud-spanner-emulator pod: app=cloud-spanner-emulator within 6m0s: context deadline exceeded
addons_test.go:836: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-477000
addons_test.go:836: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-477000: exit status 10 (1m51.486656167s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE: disable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl delete --force --ignore-not-found -f /etc/kubernetes/addons/deployment.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: the path "/etc/kubernetes/addons/deployment.yaml" does not exist
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:837: failed to disable cloud-spanner addon: args "out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-477000" : exit status 10
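Note: the disable fails because /etc/kubernetes/addons/deployment.yaml was never rendered into the guest, which is consistent with the emulator pods never starting. A hedged way to confirm before retrying:

    # List whatever addon manifests minikube actually materialized in the VM.
    minikube ssh -p addons-477000 -- sudo ls -l /etc/kubernetes/addons/
    kubectl --context addons-477000 get deploy,pods -l app=cloud-spanner-emulator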
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-477000 -n addons-477000
helpers_test.go:244: <<< TestAddons/parallel/CloudSpanner FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CloudSpanner]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-477000 logs -n 25
helpers_test.go:252: TestAddons/parallel/CloudSpanner logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |                     |
	|         | -p download-only-066000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| delete  | -p download-only-066000        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | --download-only -p             | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT |                     |
	|         | binary-mirror-062000           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49310         |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-062000        | binary-mirror-062000 | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:33 PDT |
	| start   | -p addons-477000               | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:33 PDT | 15 Jun 23 09:40 PDT |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-477000        | jenkins | v1.30.1 | 15 Jun 23 09:52 PDT |                     |
	|         | addons-477000                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
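For reference, the addons-477000 start entry in the Audit table above flattens to this single invocation (arguments exactly as recorded, in table order):

    out/minikube-darwin-arm64 start -p addons-477000 --wait=true --memory=4000 \
      --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots \
      --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
      --addons=inspektor-gadget --driver=qemu2 --addons=ingress --addons=ingress-dns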
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 09:33:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 09:33:17.135905    1397 out.go:296] Setting OutFile to fd 1 ...
	I0615 09:33:17.136030    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136034    1397 out.go:309] Setting ErrFile to fd 2...
	I0615 09:33:17.136037    1397 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:33:17.136120    1397 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 09:33:17.137161    1397 out.go:303] Setting JSON to false
	I0615 09:33:17.152121    1397 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":168,"bootTime":1686846629,"procs":371,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 09:33:17.152202    1397 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 09:33:17.156891    1397 out.go:177] * [addons-477000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 09:33:17.159887    1397 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 09:33:17.163775    1397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:33:17.159984    1397 notify.go:220] Checking for updates...
	I0615 09:33:17.171800    1397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 09:33:17.174813    1397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 09:33:17.177828    1397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 09:33:17.180704    1397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 09:33:17.183887    1397 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 09:33:17.187819    1397 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 09:33:17.194783    1397 start.go:297] selected driver: qemu2
	I0615 09:33:17.194788    1397 start.go:884] validating driver "qemu2" against <nil>
	I0615 09:33:17.194794    1397 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 09:33:17.196752    1397 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 09:33:17.199793    1397 out.go:177] * Automatically selected the socket_vmnet network
	I0615 09:33:17.201307    1397 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 09:33:17.201331    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:17.201335    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:17.201341    1397 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 09:33:17.201349    1397 start_flags.go:319] config:
	{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:17.201434    1397 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 09:33:17.209829    1397 out.go:177] * Starting control plane node addons-477000 in cluster addons-477000
	I0615 09:33:17.213726    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:17.213749    1397 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 09:33:17.213768    1397 cache.go:57] Caching tarball of preloaded images
	I0615 09:33:17.213824    1397 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 09:33:17.213830    1397 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 09:33:17.214051    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:17.214063    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json: {Name:mkc1c34b82952aae697463d2d78c6ea098445790 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:17.214292    1397 start.go:365] acquiring machines lock for addons-477000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 09:33:17.214399    1397 start.go:369] acquired machines lock for "addons-477000" in 101.583µs
	I0615 09:33:17.214409    1397 start.go:93] Provisioning new machine with config: &{Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:33:17.214436    1397 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 09:33:17.221743    1397 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0615 09:33:17.568031    1397 start.go:159] libmachine.API.Create for "addons-477000" (driver="qemu2")
	I0615 09:33:17.568071    1397 client.go:168] LocalClient.Create starting
	I0615 09:33:17.568226    1397 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 09:33:17.626803    1397 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 09:33:17.737973    1397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 09:33:17.902563    1397 main.go:141] libmachine: Creating SSH key...
	I0615 09:33:17.968617    1397 main.go:141] libmachine: Creating Disk image...
	I0615 09:33:17.968623    1397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 09:33:17.969817    1397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.004891    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.004923    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.004982    1397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2 +20000M
	I0615 09:33:18.012411    1397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 09:33:18.012436    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.012455    1397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.012466    1397 main.go:141] libmachine: Starting QEMU VM...
	I0615 09:33:18.012501    1397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:25:cc:0f:2e:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/disk.qcow2
	I0615 09:33:18.081537    1397 main.go:141] libmachine: STDOUT: 
	I0615 09:33:18.081558    1397 main.go:141] libmachine: STDERR: 
	I0615 09:33:18.081562    1397 main.go:141] libmachine: Attempt 0
	I0615 09:33:18.081577    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:20.083712    1397 main.go:141] libmachine: Attempt 1
	I0615 09:33:20.083961    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:22.086122    1397 main.go:141] libmachine: Attempt 2
	I0615 09:33:22.086166    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:24.088201    1397 main.go:141] libmachine: Attempt 3
	I0615 09:33:24.088224    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:26.090268    1397 main.go:141] libmachine: Attempt 4
	I0615 09:33:26.090325    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:28.092361    1397 main.go:141] libmachine: Attempt 5
	I0615 09:33:28.092379    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094488    1397 main.go:141] libmachine: Attempt 6
	I0615 09:33:30.094575    1397 main.go:141] libmachine: Searching for 1a:25:cc:f:2e:6f in /var/db/dhcpd_leases ...
	I0615 09:33:30.094985    1397 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0615 09:33:30.095099    1397 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 09:33:30.095124    1397 main.go:141] libmachine: Found match: 1a:25:cc:f:2e:6f
	I0615 09:33:30.095168    1397 main.go:141] libmachine: IP: 192.168.105.2
	I0615 09:33:30.095195    1397 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I0615 09:33:32.115264    1397 machine.go:88] provisioning docker machine ...
	I0615 09:33:32.115338    1397 buildroot.go:166] provisioning hostname "addons-477000"
	I0615 09:33:32.116828    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.117588    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.117607    1397 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-477000 && echo "addons-477000" | sudo tee /etc/hostname
	I0615 09:33:32.199158    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-477000
	
	I0615 09:33:32.199283    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.199748    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.199763    1397 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-477000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-477000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-477000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0615 09:33:32.260846    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0615 09:33:32.260864    1397 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16718-868/.minikube CaCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16718-868/.minikube}
	I0615 09:33:32.260878    1397 buildroot.go:174] setting up certificates
	I0615 09:33:32.260906    1397 provision.go:83] configureAuth start
	I0615 09:33:32.260912    1397 provision.go:138] copyHostCerts
	I0615 09:33:32.261103    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem (1078 bytes)
	I0615 09:33:32.261436    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem (1123 bytes)
	I0615 09:33:32.262101    1397 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem (1679 bytes)
	I0615 09:33:32.262442    1397 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem org=jenkins.addons-477000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-477000]
	I0615 09:33:32.306279    1397 provision.go:172] copyRemoteCerts
	I0615 09:33:32.306343    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0615 09:33:32.306360    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.335305    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0615 09:33:32.343471    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0615 09:33:32.351180    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0615 09:33:32.358490    1397 provision.go:86] duration metric: configureAuth took 97.576167ms
	I0615 09:33:32.358498    1397 buildroot.go:189] setting minikube options for container-runtime
	I0615 09:33:32.358950    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:33:32.358995    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.359216    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.359220    1397 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0615 09:33:32.410196    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0615 09:33:32.410204    1397 buildroot.go:70] root file system type: tmpfs
	I0615 09:33:32.410261    1397 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0615 09:33:32.410301    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.410550    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.410587    1397 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0615 09:33:32.468329    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0615 09:33:32.468380    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.468634    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.468643    1397 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0615 09:33:32.794674    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0615 09:33:32.794698    1397 machine.go:91] provisioned docker machine in 679.423792ms
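The unit swap at 09:33:32.468 follows a write-to-`.new`-then-replace idiom: the rendered unit is written next to the live one, `diff -u` acts as the change detector, and only on a difference (or a missing file, as on this first boot) is the new unit moved into place and Docker reloaded. A standalone sketch of that idiom, where `render_docker_unit` is a hypothetical stand-in for the Go template rendering:

    #!/bin/bash
    # Sketch of the diff-or-swap idiom from the log; render_docker_unit is hypothetical.
    unit=/lib/systemd/system/docker.service
    render_docker_unit | sudo tee "$unit.new" >/dev/null
    # diff exits non-zero when the files differ or the live unit is absent,
    # so the block below runs only when an update is actually needed.
    sudo diff -u "$unit" "$unit.new" || {
        sudo mv "$unit.new" "$unit"
        sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
    }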
	I0615 09:33:32.794704    1397 client.go:171] LocalClient.Create took 15.226996125s
	I0615 09:33:32.794723    1397 start.go:167] duration metric: libmachine.API.Create for "addons-477000" took 15.227064791s
	I0615 09:33:32.794726    1397 start.go:300] post-start starting for "addons-477000" (driver="qemu2")
	I0615 09:33:32.794731    1397 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0615 09:33:32.794812    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0615 09:33:32.794822    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.823879    1397 ssh_runner.go:195] Run: cat /etc/os-release
	I0615 09:33:32.825122    1397 info.go:137] Remote host: Buildroot 2021.02.12
	I0615 09:33:32.825128    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/addons for local assets ...
	I0615 09:33:32.825196    1397 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/files for local assets ...
	I0615 09:33:32.825222    1397 start.go:303] post-start completed in 30.494125ms
	I0615 09:33:32.825555    1397 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/config.json ...
	I0615 09:33:32.825707    1397 start.go:128] duration metric: createHost completed in 15.611646375s
	I0615 09:33:32.825734    1397 main.go:141] libmachine: Using SSH client type: native
	I0615 09:33:32.825947    1397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102d08e20] 0x102d0b880 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0615 09:33:32.825951    1397 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0615 09:33:32.876753    1397 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686846813.320713543
	
	I0615 09:33:32.876760    1397 fix.go:206] guest clock: 1686846813.320713543
	I0615 09:33:32.876764    1397 fix.go:219] Guest: 2023-06-15 09:33:33.320713543 -0700 PDT Remote: 2023-06-15 09:33:32.825711 -0700 PDT m=+15.708594751 (delta=495.002543ms)
	I0615 09:33:32.876775    1397 fix.go:190] guest clock delta is within tolerance: 495.002543ms
	I0615 09:33:32.876778    1397 start.go:83] releasing machines lock for "addons-477000", held for 15.662753208s
	I0615 09:33:32.877060    1397 ssh_runner.go:195] Run: cat /version.json
	I0615 09:33:32.877067    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.877085    1397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0615 09:33:32.877121    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:33:32.951587    1397 ssh_runner.go:195] Run: systemctl --version
	I0615 09:33:32.953983    1397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0615 09:33:32.956008    1397 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0615 09:33:32.956040    1397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0615 09:33:32.961754    1397 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0615 09:33:32.961761    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:32.961877    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:32.967359    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0615 09:33:32.970783    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0615 09:33:32.973872    1397 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0615 09:33:32.973908    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0615 09:33:32.976794    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.979871    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0615 09:33:32.983273    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 09:33:32.986847    1397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0615 09:33:32.990009    1397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0615 09:33:32.992910    1397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0615 09:33:32.995885    1397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0615 09:33:32.999181    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.082046    1397 ssh_runner.go:195] Run: sudo systemctl restart containerd
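The sed series at 09:33:32.967–32.999 rewrites /etc/containerd/config.toml in place so containerd matches the chosen "cgroupfs" driver: the sandbox image is pinned, `SystemdCgroup` is forced to false, and the legacy runc v1 runtime names are mapped to `io.containerd.runc.v2`. Condensed into one script (same edits as above; file layout assumed to match the guest's):

    #!/bin/bash
    # Force containerd onto the cgroupfs driver, mirroring the sed series above.
    cfg=/etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$cfg"
    sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$cfg"
    sudo systemctl daemon-reload && sudo systemctl restart containerd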
	I0615 09:33:33.090305    1397 start.go:466] detecting cgroup driver to use...
	I0615 09:33:33.090367    1397 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0615 09:33:33.095444    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.099628    1397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0615 09:33:33.106008    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 09:33:33.110583    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.115305    1397 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0615 09:33:33.157221    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 09:33:33.165685    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 09:33:33.171700    1397 ssh_runner.go:195] Run: which cri-dockerd
	I0615 09:33:33.173347    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0615 09:33:33.176671    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0615 09:33:33.184036    1397 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0615 09:33:33.256172    1397 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0615 09:33:33.326477    1397 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0615 09:33:33.326492    1397 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0615 09:33:33.331797    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:33.394602    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:34.551420    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1568305s)
	I0615 09:33:34.551480    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.614918    1397 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0615 09:33:34.680379    1397 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 09:33:34.741670    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.802995    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0615 09:33:34.810702    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:34.876193    1397 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0615 09:33:34.899281    1397 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0615 09:33:34.899375    1397 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0615 09:33:34.902005    1397 start.go:534] Will wait 60s for crictl version
	I0615 09:33:34.902039    1397 ssh_runner.go:195] Run: which crictl
	I0615 09:33:34.903665    1397 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0615 09:33:34.922827    1397 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0615 09:33:34.922910    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.936535    1397 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 09:33:34.948006    1397 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0615 09:33:34.948101    1397 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0615 09:33:34.949468    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
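Both /etc/hosts edits (host.minikube.internal here, and control-plane.minikube.internal at 09:33:39.034 below) use the same idempotent pattern: strip any existing entry with grep -v, append the fresh mapping, and copy the result back over /etc/hosts. As a standalone sketch:

    #!/bin/bash
    # Idempotent /etc/hosts entry, same pattern as the Run: line above.
    name=host.minikube.internal
    ip=192.168.105.1
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts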
	I0615 09:33:34.953059    1397 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:34.953103    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:34.958178    1397 docker.go:636] Got preloaded images: 
	I0615 09:33:34.958185    1397 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0615 09:33:34.958223    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:34.960919    1397 ssh_runner.go:195] Run: which lz4
	I0615 09:33:34.962156    1397 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0615 09:33:34.963566    1397 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0615 09:33:34.963580    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0615 09:33:36.258372    1397 docker.go:600] Took 1.296282 seconds to copy over tarball
	I0615 09:33:36.258440    1397 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0615 09:33:37.363016    1397 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.104582708s)
	I0615 09:33:37.363034    1397 ssh_runner.go:146] rm: /preloaded.tar.lz4
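The preload path avoids pulling images over the network: the ~343 MB tarball from the host cache is copied to /preloaded.tar.lz4, unpacked straight into /var with lz4 as tar's external decompressor (-I lz4), then deleted. Reduced to two commands (paths and guest address as in this run; scp in place of the Go scp client is a simplification):

    #!/bin/bash
    # Ship the cached image tarball into the guest and unpack it over /var.
    key=/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa
    tarball=/Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
    scp -i "$key" "$tarball" docker@192.168.105.2:/preloaded.tar.lz4
    ssh -i "$key" docker@192.168.105.2 'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'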
	I0615 09:33:37.379849    1397 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 09:33:37.383479    1397 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0615 09:33:37.388752    1397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 09:33:37.449408    1397 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 09:33:38.998025    1397 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.548624125s)
	I0615 09:33:38.998130    1397 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 09:33:39.004063    1397 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0615 09:33:39.004075    1397 cache_images.go:84] Images are preloaded, skipping loading
	I0615 09:33:39.004149    1397 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0615 09:33:39.011968    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:39.011977    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:39.012000    1397 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0615 09:33:39.012011    1397 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-477000 NodeName:addons-477000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0615 09:33:39.012111    1397 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-477000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0615 09:33:39.012156    1397 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-477000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0615 09:33:39.012203    1397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0615 09:33:39.015681    1397 binaries.go:44] Found k8s binaries, skipping transfer
	I0615 09:33:39.015718    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0615 09:33:39.018849    1397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I0615 09:33:39.023920    1397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0615 09:33:39.028733    1397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
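The config staged as /var/tmp/minikube/kubeadm.yaml.new above is the four-document YAML printed at 09:33:39.012 (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A config of this shape can be exercised end to end without mutating the node via kubeadm's dry-run mode (a sketch; run on the guest once the file is in place):

    #!/bin/bash
    # Preview what kubeadm would do with the generated config, without touching the host.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run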
	I0615 09:33:39.033571    1397 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0615 09:33:39.034818    1397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 09:33:39.038909    1397 certs.go:56] Setting up /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000 for IP: 192.168.105.2
	I0615 09:33:39.038918    1397 certs.go:190] acquiring lock for shared ca certs: {Name:mk9ee4d7ca68f2cc32c8609d33f6ce33c43a91d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.039073    1397 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key
	I0615 09:33:39.109209    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt ...
	I0615 09:33:39.109214    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt: {Name:mka7538e8370ad0560f47e28d206b077e2dbbef8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109425    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key ...
	I0615 09:33:39.109428    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key: {Name:mkca6c7de675216938ac1a6663738af412e2d280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.109532    1397 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key
	I0615 09:33:39.219574    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt ...
	I0615 09:33:39.219577    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt: {Name:mk21a595039c96735254391e5270364a73e52306 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219709    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key ...
	I0615 09:33:39.219712    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key: {Name:mk96cab9f1987887c2b313cd365bdba518ec818b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.219826    1397 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key
	I0615 09:33:39.219831    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt with IP's: []
	I0615 09:33:39.435828    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt ...
	I0615 09:33:39.435835    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: {Name:mk0f2105a4c5fdba007e9c77c7945365dc3f96af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436029    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key ...
	I0615 09:33:39.436031    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.key: {Name:mk65e491f4b4c1ee8d05045efb9265b2c697a551 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.436124    1397 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969
	I0615 09:33:39.436133    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0615 09:33:39.510125    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 ...
	I0615 09:33:39.510129    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969: {Name:mk7c90d062166950585957cb3f0ce136594c9cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510277    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 ...
	I0615 09:33:39.510280    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969: {Name:mk4266445b8f2d5bc078d169ee24b8765955e2f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.510384    1397 certs.go:337] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt
	I0615 09:33:39.510598    1397 certs.go:341] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key
	I0615 09:33:39.510713    1397 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key
	I0615 09:33:39.510723    1397 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt with IP's: []
	I0615 09:33:39.610633    1397 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt ...
	I0615 09:33:39.610637    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt: {Name:mk04e06c13fe3eccffb62f328096a02f5668baa1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.610779    1397 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key ...
	I0615 09:33:39.610783    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key: {Name:mk21e3c8e84fcac9a2d9da5e0fa06b26ad1ee7cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:33:39.611042    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem (1679 bytes)
	I0615 09:33:39.611072    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem (1078 bytes)
	I0615 09:33:39.611094    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem (1123 bytes)
	I0615 09:33:39.611429    1397 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem (1679 bytes)
	I0615 09:33:39.611960    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0615 09:33:39.619546    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0615 09:33:39.626711    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0615 09:33:39.633501    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0615 09:33:39.640010    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0615 09:33:39.647112    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0615 09:33:39.654063    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0615 09:33:39.660582    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0615 09:33:39.667533    1397 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0615 09:33:39.674545    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0615 09:33:39.680339    1397 ssh_runner.go:195] Run: openssl version
	I0615 09:33:39.682379    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0615 09:33:39.685404    1397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686832    1397 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 15 16:33 /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.686855    1397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0615 09:33:39.688703    1397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
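The `b5213941.0` name in the symlink above is not arbitrary: OpenSSL locates CAs in /etc/ssl/certs by subject-hash, so the link must be named `<hash>.0`. The hash comes from the `openssl x509 -hash` call on the preceding line; to derive it and recreate the link yourself:

    #!/bin/bash
    # Derive the subject-hash OpenSSL uses to find the CA, then install the lookup symlink.
    h="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
    echo "$h"   # prints b5213941 for this CA
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"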
	I0615 09:33:39.691957    1397 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0615 09:33:39.693381    1397 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0615 09:33:39.693418    1397 kubeadm.go:404] StartCluster: {Name:addons-477000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:addons-477000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:33:39.693485    1397 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0615 09:33:39.699291    1397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0615 09:33:39.702928    1397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0615 09:33:39.706168    1397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0615 09:33:39.708902    1397 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0615 09:33:39.708925    1397 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0615 09:33:39.731022    1397 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0615 09:33:39.731051    1397 kubeadm.go:322] [preflight] Running pre-flight checks
	I0615 09:33:39.787198    1397 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0615 09:33:39.787252    1397 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0615 09:33:39.787291    1397 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0615 09:33:39.845524    1397 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0615 09:33:39.853720    1397 out.go:204]   - Generating certificates and keys ...
	I0615 09:33:39.853771    1397 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0615 09:33:39.853800    1397 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0615 09:33:40.047052    1397 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0615 09:33:40.281668    1397 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0615 09:33:40.373604    1397 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0615 09:33:40.496002    1397 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0615 09:33:40.752895    1397 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0615 09:33:40.752975    1397 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.889354    1397 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0615 09:33:40.889424    1397 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-477000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0615 09:33:40.967392    1397 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0615 09:33:41.132487    1397 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0615 09:33:41.175551    1397 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0615 09:33:41.175583    1397 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0615 09:33:41.275708    1397 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0615 09:33:41.313261    1397 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0615 09:33:41.394612    1397 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0615 09:33:41.488793    1397 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0615 09:33:41.495623    1397 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0615 09:33:41.495672    1397 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0615 09:33:41.495691    1397 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0615 09:33:41.565044    1397 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0615 09:33:41.570236    1397 out.go:204]   - Booting up control plane ...
	I0615 09:33:41.570302    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0615 09:33:41.570344    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0615 09:33:41.570389    1397 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0615 09:33:41.570430    1397 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0615 09:33:41.570514    1397 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0615 09:33:45.571765    1397 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003295 seconds
	I0615 09:33:45.571857    1397 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0615 09:33:45.577408    1397 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0615 09:33:46.094757    1397 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0615 09:33:46.095006    1397 kubeadm.go:322] [mark-control-plane] Marking the node addons-477000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0615 09:33:46.613385    1397 kubeadm.go:322] [bootstrap-token] Using token: f4kg8y.q60xaa2tn5uwspbb
	I0615 09:33:46.619341    1397 out.go:204]   - Configuring RBAC rules ...
	I0615 09:33:46.619403    1397 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0615 09:33:46.620813    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0615 09:33:46.624663    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0615 09:33:46.625913    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0615 09:33:46.627185    1397 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0615 09:33:46.628306    1397 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0615 09:33:46.632523    1397 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0615 09:33:46.806353    1397 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0615 09:33:47.022461    1397 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0615 09:33:47.022733    1397 kubeadm.go:322] 
	I0615 09:33:47.022774    1397 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0615 09:33:47.022780    1397 kubeadm.go:322] 
	I0615 09:33:47.022834    1397 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0615 09:33:47.022839    1397 kubeadm.go:322] 
	I0615 09:33:47.022851    1397 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0615 09:33:47.022879    1397 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0615 09:33:47.022912    1397 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0615 09:33:47.022916    1397 kubeadm.go:322] 
	I0615 09:33:47.022952    1397 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0615 09:33:47.022958    1397 kubeadm.go:322] 
	I0615 09:33:47.022992    1397 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0615 09:33:47.022995    1397 kubeadm.go:322] 
	I0615 09:33:47.023016    1397 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0615 09:33:47.023050    1397 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0615 09:33:47.023081    1397 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0615 09:33:47.023083    1397 kubeadm.go:322] 
	I0615 09:33:47.023121    1397 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0615 09:33:47.023158    1397 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0615 09:33:47.023161    1397 kubeadm.go:322] 
	I0615 09:33:47.023197    1397 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023261    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 \
	I0615 09:33:47.023273    1397 kubeadm.go:322] 	--control-plane 
	I0615 09:33:47.023278    1397 kubeadm.go:322] 
	I0615 09:33:47.023320    1397 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0615 09:33:47.023326    1397 kubeadm.go:322] 
	I0615 09:33:47.023380    1397 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token f4kg8y.q60xaa2tn5uwspbb \
	I0615 09:33:47.023443    1397 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 
	I0615 09:33:47.023525    1397 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0615 09:33:47.023586    1397 cni.go:84] Creating CNI manager for ""
	I0615 09:33:47.023594    1397 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:33:47.031274    1397 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0615 09:33:47.035321    1397 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0615 09:33:47.038799    1397 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
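The 457-byte /etc/cni/net.d/1-k8s.conflist written here configures the bridge CNI recommended at 09:33:47.023 above. Its exact contents are not echoed in this log; a representative bridge+portmap conflist of the kind minikube installs (contents assumed, not captured from this run):

    #!/bin/bash
    # Representative bridge CNI config (assumed contents; this run's file is not shown).
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF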
	I0615 09:33:47.043709    1397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0615 09:33:47.043747    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.043800    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627 minikube.k8s.io/name=addons-477000 minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.095849    1397 ops.go:34] apiserver oom_adj: -16
	I0615 09:33:47.095898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:47.645147    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.145079    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:48.645093    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.145044    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:49.645148    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.145134    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:50.645328    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.145310    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:51.645116    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.144609    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:52.645278    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.145243    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:53.645239    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.145000    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:54.644744    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.145233    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:55.644949    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.145008    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:56.644938    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.143430    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:57.645224    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.144898    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:58.644909    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.144773    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:33:59.644338    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.144834    1397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 09:34:00.206574    1397 kubeadm.go:1081] duration metric: took 13.163176875s to wait for elevateKubeSystemPrivileges.
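The burst of `kubectl get sa default` calls from 09:33:47.095 to 09:34:00.144 above is a fixed-interval poll: the run proceeds only once the `default` ServiceAccount exists in the fresh cluster (13.16s here). The loop reduces to:

    #!/bin/bash
    # Poll every 500ms until the default ServiceAccount shows up.
    k=/var/lib/minikube/binaries/v1.27.3/kubectl
    until sudo "$k" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done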
	I0615 09:34:00.206587    1397 kubeadm.go:406] StartCluster complete in 20.513668625s
	I0615 09:34:00.206614    1397 settings.go:142] acquiring lock: {Name:mk45a698fcd8dd8ae6984c9cf4ad4d183fdb5424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.206769    1397 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:34:00.206961    1397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/kubeconfig: {Name:mkbe9cac04fb467055323f2e3d5db2c6ddc287ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:34:00.207185    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0615 09:34:00.207249    1397 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0615 09:34:00.207317    1397 addons.go:66] Setting ingress=true in profile "addons-477000"
	I0615 09:34:00.207322    1397 addons.go:66] Setting ingress-dns=true in profile "addons-477000"
	I0615 09:34:00.207325    1397 addons.go:228] Setting addon ingress=true in "addons-477000"
	I0615 09:34:00.207327    1397 addons.go:228] Setting addon ingress-dns=true in "addons-477000"
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207358    1397 addons.go:66] Setting cloud-spanner=true in profile "addons-477000"
	I0615 09:34:00.207362    1397 addons.go:228] Setting addon cloud-spanner=true in "addons-477000"
	I0615 09:34:00.207371    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207402    1397 addons.go:66] Setting metrics-server=true in profile "addons-477000"
	I0615 09:34:00.207421    1397 addons.go:66] Setting registry=true in profile "addons-477000"
	I0615 09:34:00.207457    1397 addons.go:228] Setting addon registry=true in "addons-477000"
	I0615 09:34:00.207434    1397 addons.go:66] Setting inspektor-gadget=true in profile "addons-477000"
	I0615 09:34:00.207483    1397 addons.go:228] Setting addon inspektor-gadget=true in "addons-477000"
	I0615 09:34:00.207494    1397 addons.go:228] Setting addon metrics-server=true in "addons-477000"
	I0615 09:34:00.207502    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207355    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207433    1397 addons.go:66] Setting default-storageclass=true in profile "addons-477000"
	I0615 09:34:00.207531    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 09:34:00.207537    1397 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-477000"
	I0615 09:34:00.207575    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207475    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207436    1397 addons.go:66] Setting csi-hostpath-driver=true in profile "addons-477000"
	I0615 09:34:00.207676    1397 addons.go:228] Setting addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.207687    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207317    1397 addons.go:66] Setting volumesnapshots=true in profile "addons-477000"
	I0615 09:34:00.207735    1397 addons.go:228] Setting addon volumesnapshots=true in "addons-477000"
	I0615 09:34:00.207746    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.207438    1397 addons.go:66] Setting gcp-auth=true in profile "addons-477000"
	I0615 09:34:00.207776    1397 mustload.go:65] Loading cluster: addons-477000
	I0615 09:34:00.207433    1397 addons.go:66] Setting storage-provisioner=true in profile "addons-477000"
	I0615 09:34:00.208143    1397 addons.go:228] Setting addon storage-provisioner=true in "addons-477000"
	I0615 09:34:00.208157    1397 host.go:66] Checking if "addons-477000" exists ...
	W0615 09:34:00.208299    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208325    1397 addons.go:274] "addons-477000" is not running, setting registry=true and skipping enablement (err=<nil>)
	W0615 09:34:00.208331    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208338    1397 addons.go:274] "addons-477000" is not running, setting inspektor-gadget=true and skipping enablement (err=<nil>)
	I0615 09:34:00.208333    1397 addons.go:464] Verifying addon registry=true in "addons-477000"
	I0615 09:34:00.211629    1397 out.go:177] * Verifying registry addon...
	W0615 09:34:00.208373    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.207993    1397 config.go:182] Loaded profile config "addons-477000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	W0615 09:34:00.208397    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208133    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208437    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208601    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	W0615 09:34:00.208662    1397 host.go:54] host status for "addons-477000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/monitor: connect: connection refused
	I0615 09:34:00.215730    1397 addons.go:228] Setting addon default-storageclass=true in "addons-477000"
	W0615 09:34:00.218550    1397 addons.go:274] "addons-477000" is not running, setting metrics-server=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218559    1397 addons.go:274] "addons-477000" is not running, setting ingress=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting csi-hostpath-driver=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218576    1397 addons.go:274] "addons-477000" is not running, setting cloud-spanner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218580    1397 addons.go:274] "addons-477000" is not running, setting storage-provisioner=true and skipping enablement (err=<nil>)
	W0615 09:34:00.218593    1397 addons.go:274] "addons-477000" is not running, setting ingress-dns=true and skipping enablement (err=<nil>)
	I0615 09:34:00.218944    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0615 09:34:00.219273    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.221576    1397 addons.go:464] Verifying addon metrics-server=true in "addons-477000"
	I0615 09:34:00.221583    1397 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0615 09:34:00.224623    1397 addons.go:420] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0615 09:34:00.224631    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0615 09:34:00.224638    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.221629    1397 addons.go:464] Verifying addon ingress=true in "addons-477000"
	I0615 09:34:00.221636    1397 addons.go:464] Verifying addon csi-hostpath-driver=true in "addons-477000"
	I0615 09:34:00.221723    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:00.228543    1397 out.go:177] * Verifying ingress addon...
	I0615 09:34:00.238557    1397 out.go:177] * Verifying csi-hostpath-driver addon...
	I0615 09:34:00.229248    1397 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.235988    1397 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0615 09:34:00.241352    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=registry
	I0615 09:34:00.242560    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0615 09:34:00.242594    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:00.242973    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0615 09:34:00.245385    1397 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0615 09:34:00.251897    1397 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
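The three kapi.go:75 waits above poll the API server for pods matching each addon's label selector; these are the same selectors that later hit the 6m0s deadline at 09:40:00. A rough stand-alone equivalent of one such wait, using stock kubectl (label selector and namespace taken from the log; the timeout mirrors minikube's 6-minute default; note kubectl wait errors out immediately if no pods match yet):

    # Block until a registry pod reports Ready, or give up after 6 minutes:
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=6m0s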
	I0615 09:34:00.275682    1397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
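The pipeline above fetches the coredns ConfigMap, uses sed to splice a hosts block (and a log directive) into the Corefile, and pushes the result back with kubectl replace. A sketch of the outcome and a way to inspect it, assuming kubectl is pointed at this cluster:

    # After the rewrite, the Corefile should contain a stanza like:
    #     hosts {
    #        192.168.105.1 host.minikube.internal
    #        fallthrough
    #     }
    # Inspect the live ConfigMap to confirm:
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'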
	I0615 09:34:00.278596    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0615 09:34:00.278604    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0615 09:34:00.310787    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0615 09:34:00.310799    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0615 09:34:00.340750    1397 addons.go:420] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0615 09:34:00.340763    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0615 09:34:00.370147    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0615 09:34:00.374756    1397 addons.go:420] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0615 09:34:00.374766    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0615 09:34:00.391483    1397 addons.go:420] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.391493    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0615 09:34:00.396735    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:00.725129    1397 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-477000" context rescaled to 1 replicas
	I0615 09:34:00.725155    1397 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 09:34:00.731827    1397 out.go:177] * Verifying Kubernetes components...
	I0615 09:34:00.735986    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
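The kubelet check above leans on systemctl's exit status: is-active --quiet prints nothing and exits 0 only when the unit is active, so the command doubles as a boolean probe. For example:

    # Exit code 0 => kubelet is running; non-zero => it is not.
    sudo systemctl is-active --quiet kubelet && echo kubelet: running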
	I0615 09:34:01.130555    1397 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	W0615 09:34:01.273320    1397 addons.go:446] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0615 09:34:01.273347    1397 retry.go:31] will retry after 358.412085ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
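The failure above is a CRD establishment race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the CRD introducing that kind is created in the same apply and has not been established by the API server yet, hence "no matches for kind ... ensure CRDs are installed first". minikube simply retries (below, with apply --force). A hedged two-phase alternative that avoids the race, with file paths and the CRD name taken from the log:

    # 1. Create the CRDs first.
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    # 2. Wait for the API server to establish the new kind.
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # 3. Only then create instances of it.
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml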
	I0615 09:34:01.273795    1397 node_ready.go:35] waiting up to 6m0s for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275272    1397 node_ready.go:49] node "addons-477000" has status "Ready":"True"
	I0615 09:34:01.275281    1397 node_ready.go:38] duration metric: took 1.477792ms waiting for node "addons-477000" to be "Ready" ...
	I0615 09:34:01.275284    1397 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 09:34:01.279498    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:01.633151    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0615 09:34:02.299497    1397 pod_ready.go:92] pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.299518    1397 pod_ready.go:81] duration metric: took 1.020034208s waiting for pod "coredns-5d78c9869d-c5j9m" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.299526    1397 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303736    1397 pod_ready.go:92] pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.303743    1397 pod_ready.go:81] duration metric: took 4.212458ms waiting for pod "coredns-5d78c9869d-mds5s" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.303749    1397 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307230    1397 pod_ready.go:92] pod "etcd-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.307237    1397 pod_ready.go:81] duration metric: took 3.484042ms waiting for pod "etcd-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.307243    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311004    1397 pod_ready.go:92] pod "kube-apiserver-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.311013    1397 pod_ready.go:81] duration metric: took 3.766916ms waiting for pod "kube-apiserver-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.311019    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481809    1397 pod_ready.go:92] pod "kube-controller-manager-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.481828    1397 pod_ready.go:81] duration metric: took 170.807958ms waiting for pod "kube-controller-manager-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.481838    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883885    1397 pod_ready.go:92] pod "kube-proxy-8rgcs" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:02.883919    1397 pod_ready.go:81] duration metric: took 402.082375ms waiting for pod "kube-proxy-8rgcs" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:02.883933    1397 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277736    1397 pod_ready.go:92] pod "kube-scheduler-addons-477000" in "kube-system" namespace has status "Ready":"True"
	I0615 09:34:03.277748    1397 pod_ready.go:81] duration metric: took 393.817875ms waiting for pod "kube-scheduler-addons-477000" in "kube-system" namespace to be "Ready" ...
	I0615 09:34:03.277754    1397 pod_ready.go:38] duration metric: took 2.002511417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 09:34:03.277768    1397 api_server.go:52] waiting for apiserver process to appear ...
	I0615 09:34:03.277845    1397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 09:34:03.986804    1397 api_server.go:72] duration metric: took 3.261712416s to wait for apiserver process to appear ...
	I0615 09:34:03.986816    1397 api_server.go:88] waiting for apiserver healthz status ...
	I0615 09:34:03.986824    1397 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0615 09:34:03.986882    1397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.353767917s)
	I0615 09:34:03.990093    1397 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
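The readiness probe above is just an HTTPS GET against the API server's /healthz endpoint. The same check through kubectl's raw API access, assuming a kubeconfig for this cluster:

    kubectl get --raw='/healthz'            # prints: ok
    kubectl get --raw='/healthz?verbose'    # per-check breakdown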
	I0615 09:34:03.990734    1397 api_server.go:141] control plane version: v1.27.3
	I0615 09:34:03.990742    1397 api_server.go:131] duration metric: took 3.923291ms to wait for apiserver health ...
	I0615 09:34:03.990745    1397 system_pods.go:43] waiting for kube-system pods to appear ...
	I0615 09:34:03.993833    1397 system_pods.go:59] 9 kube-system pods found
	I0615 09:34:03.993840    1397 system_pods.go:61] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.993843    1397 system_pods.go:61] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.993845    1397 system_pods.go:61] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.993848    1397 system_pods.go:61] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.993851    1397 system_pods.go:61] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.993853    1397 system_pods.go:61] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.993855    1397 system_pods.go:61] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.993859    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993864    1397 system_pods.go:61] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.993866    1397 system_pods.go:74] duration metric: took 3.119166ms to wait for pod list to return data ...
	I0615 09:34:03.993869    1397 default_sa.go:34] waiting for default service account to be created ...
	I0615 09:34:03.995049    1397 default_sa.go:45] found service account: "default"
	I0615 09:34:03.995055    1397 default_sa.go:55] duration metric: took 1.183708ms for default service account to be created ...
	I0615 09:34:03.995057    1397 system_pods.go:116] waiting for k8s-apps to be running ...
	I0615 09:34:03.998400    1397 system_pods.go:86] 9 kube-system pods found
	I0615 09:34:03.998409    1397 system_pods.go:89] "coredns-5d78c9869d-c5j9m" [029d925c-5975-4103-a809-e22de9cdf1be] Running
	I0615 09:34:03.998411    1397 system_pods.go:89] "coredns-5d78c9869d-mds5s" [6851e0cd-a4cb-4c11-8b37-6e07dedcb90e] Running
	I0615 09:34:03.998414    1397 system_pods.go:89] "etcd-addons-477000" [41610910-d77e-4a7d-8f06-bcf205880f06] Running
	I0615 09:34:03.998416    1397 system_pods.go:89] "kube-apiserver-addons-477000" [49f23173-0ac0-4711-9c55-a96c00aa0881] Running
	I0615 09:34:03.998419    1397 system_pods.go:89] "kube-controller-manager-addons-477000" [befaa390-5c21-4b4a-930f-d4c4c3559d3d] Running
	I0615 09:34:03.998421    1397 system_pods.go:89] "kube-proxy-8rgcs" [6b460d21-5a79-40c6-84a1-e0551f8e91b9] Running
	I0615 09:34:03.998424    1397 system_pods.go:89] "kube-scheduler-addons-477000" [995baab0-4f21-4690-baf3-4c4682320ca6] Running
	I0615 09:34:03.998429    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-p6hk4" [f6c07a51-e0a4-4ed6-a2fb-f927be1aa9f7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998433    1397 system_pods.go:89] "snapshot-controller-75bbb956b9-prqv8" [368ee712-d2ca-4c1b-9d87-bffa3c354d7c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0615 09:34:03.998436    1397 system_pods.go:126] duration metric: took 3.376208ms to wait for k8s-apps to be running ...
	I0615 09:34:03.998439    1397 system_svc.go:44] waiting for kubelet service to be running ....
	I0615 09:34:03.998489    1397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 09:34:04.003913    1397 system_svc.go:56] duration metric: took 5.471458ms WaitForService to wait for kubelet.
	I0615 09:34:04.003921    1397 kubeadm.go:581] duration metric: took 3.278833625s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0615 09:34:04.003932    1397 node_conditions.go:102] verifying NodePressure condition ...
	I0615 09:34:04.077208    1397 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0615 09:34:04.077239    1397 node_conditions.go:123] node cpu capacity is 2
	I0615 09:34:04.077244    1397 node_conditions.go:105] duration metric: took 73.311333ms to run NodePressure ...
	I0615 09:34:04.077249    1397 start.go:228] waiting for startup goroutines ...
	I0615 09:34:06.831960    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0615 09:34:06.832053    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.882622    1397 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0615 09:34:06.891297    1397 addons.go:228] Setting addon gcp-auth=true in "addons-477000"
	I0615 09:34:06.891339    1397 host.go:66] Checking if "addons-477000" exists ...
	I0615 09:34:06.892599    1397 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0615 09:34:06.892612    1397 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/addons-477000/id_rsa Username:docker}
	I0615 09:34:06.928262    1397 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0615 09:34:06.932997    1397 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0615 09:34:06.937187    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0615 09:34:06.937194    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0615 09:34:06.943495    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0615 09:34:06.943502    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0615 09:34:06.949337    1397 addons.go:420] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:06.949343    1397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0615 09:34:06.954968    1397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0615 09:34:07.475178    1397 addons.go:464] Verifying addon gcp-auth=true in "addons-477000"
	I0615 09:34:07.478304    1397 out.go:177] * Verifying gcp-auth addon...
	I0615 09:34:07.485666    1397 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0615 09:34:07.491991    1397 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0615 09:34:07.492002    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:07.996710    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:08.496921    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.002133    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.494606    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:09.995080    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.495704    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:10.995530    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:11.495877    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.001470    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:12.497446    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.001473    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.502268    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:13.997362    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.503184    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:14.997798    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:15.495991    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.000278    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:16.501895    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.001719    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.495416    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:17.995757    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:18.496835    1397 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0615 09:34:19.002478    1397 kapi.go:107] duration metric: took 11.51706925s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0615 09:34:19.008171    1397 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-477000 cluster.
	I0615 09:34:19.011889    1397 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0615 09:34:19.016120    1397 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
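Both knobs mentioned in the messages above are ordinary kubectl/minikube operations. A sketch (the pod name my-pod is illustrative; the label key comes from the message above):

    # Opt a single pod out of credential mounting:
    kubectl label pod my-pod gcp-auth-skip-secret=true
    # Re-mount credentials into pods created before the addon was enabled:
    minikube addons enable gcp-auth --refresh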
	I0615 09:40:00.215049    1397 kapi.go:107] duration metric: took 6m0.00478975s to wait for kubernetes.io/minikube-addons=registry ...
	W0615 09:40:00.215455    1397 out.go:239] ! Enabling 'registry' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=registry pods: context deadline exceeded]
	I0615 09:40:00.235888    1397 kapi.go:107] duration metric: took 6m0.001641708s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0615 09:40:00.235937    1397 kapi.go:107] duration metric: took 6m0.008679083s to wait for app.kubernetes.io/name=ingress-nginx ...
	W0615 09:40:00.236028    1397 out.go:239] ! Enabling 'csi-hostpath-driver' returned an error: running callbacks: [waiting for kubernetes.io/minikube-addons=csi-hostpath-driver pods: context deadline exceeded]
	W0615 09:40:00.236088    1397 out.go:239] ! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
	I0615 09:40:00.243957    1397 out.go:177] * Enabled addons: inspektor-gadget, metrics-server, cloud-spanner, ingress-dns, storage-provisioner, default-storageclass, volumesnapshots, gcp-auth
	I0615 09:40:00.250970    1397 addons.go:499] enable addons completed in 6m0.052449292s: enabled=[inspektor-gadget metrics-server cloud-spanner ingress-dns storage-provisioner default-storageclass volumesnapshots gcp-auth]
	I0615 09:40:00.251042    1397 start.go:233] waiting for cluster config update ...
	I0615 09:40:00.251069    1397 start.go:242] writing updated cluster config ...
	I0615 09:40:00.255738    1397 ssh_runner.go:195] Run: rm -f paused
	I0615 09:40:00.403218    1397 start.go:582] kubectl: 1.25.9, cluster: 1.27.3 (minor skew: 2)
	I0615 09:40:00.405982    1397 out.go:177] 
	W0615 09:40:00.410033    1397 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.3.
	I0615 09:40:00.413868    1397 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0615 09:40:00.421952    1397 out.go:177] * Done! kubectl is now configured to use "addons-477000" cluster and "default" namespace by default
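The warning above reflects kubectl's version-skew policy: a client is only supported within one minor version of the API server, and 1.25.9 against 1.27.3 is a skew of two. The bundled kubectl sidesteps this, as the log hints (profile name taken from this run):

    # Runs a kubectl that matches the cluster's v1.27.3:
    minikube -p addons-477000 kubectl -- get pods -A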
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 16:53:52 UTC. --
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.712778061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.712798672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:12 addons-477000 dockerd[1091]: time="2023-06-15T16:34:12.754255249Z" level=info msg="ignoring event" container=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754320625Z" level=info msg="shim disconnected" id=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c namespace=moby
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754344286Z" level=warning msg="cleaning up after shim disconnected" id=784517c6a1ba995d5a0ac3e06a0c3b3112e1ab80318bad66824ce915bcadf04c namespace=moby
	Jun 15 16:34:12 addons-477000 dockerd[1097]: time="2023-06-15T16:34:12.754349480Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.793607046Z" level=info msg="shim disconnected" id=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1091]: time="2023-06-15T16:34:13.793691952Z" level=info msg="ignoring event" container=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.794164777Z" level=warning msg="cleaning up after shim disconnected" id=84f78f0eaf8dda7e1caea638bf41a2296e9ad897543494ce93c4491f7a6bcb47 namespace=moby
	Jun 15 16:34:13 addons-477000 dockerd[1097]: time="2023-06-15T16:34:13.794176995Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1091]: time="2023-06-15T16:34:14.817608715Z" level=info msg="ignoring event" container=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.817442834Z" level=info msg="shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819525140Z" level=warning msg="cleaning up after shim disconnected" id=08da339f6e51c0a36beb905cf51aaa6ebe12eed98abda59168295fcb62adf9d0 namespace=moby
	Jun 15 16:34:14 addons-477000 dockerd[1097]: time="2023-06-15T16:34:14.819574276Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.579995441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580279599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580317285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:15 addons-477000 dockerd[1097]: time="2023-06-15T16:34:15.580357122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:15 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c316c00ee755585c1753e0f1d6364e1731871da5d072484c67c43cac67cd349/resolv.conf as [nameserver 10.96.0.10 search gcp-auth.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 15 16:34:15 addons-477000 dockerd[1091]: time="2023-06-15T16:34:15.926803390Z" level=warning msg="reference for unknown type: " digest="sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf" remote="gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 cri-dockerd[991]: time="2023-06-15T16:34:18Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf: Status: Downloaded newer image for gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf"
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269480431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269881621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269894061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 16:34:18 addons-477000 dockerd[1097]: time="2023-06-15T16:34:18.269898788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                     CREATED             STATE               NAME                         ATTEMPT             POD ID
	6a4bcd8ac64ff       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf              19 minutes ago      Running             gcp-auth                     0                   0c316c00ee755
	8527d6f42bef1       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   19 minutes ago      Running             volume-snapshot-controller   0                   f6bd41ad4abf6
	06a9dab9c48b6       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280   19 minutes ago      Running             volume-snapshot-controller   0                   629404aaee996
	256eaaad3894a       97e04611ad434                                                                                                             19 minutes ago      Running             coredns                      0                   f6fc2a0d05c4a
	29b72a92c6578       fb73e92641fd5                                                                                                             19 minutes ago      Running             kube-proxy                   0                   405ca9198a355
	733213e41e3b9       bcb9e554eaab6                                                                                                             20 minutes ago      Running             kube-scheduler               0                   25817e506c78b
	b11fb0f325644       39dfb036b0986                                                                                                             20 minutes ago      Running             kube-apiserver               0                   0dde73a500899
	66de98cb24ea0       ab3683b584ae5                                                                                                             20 minutes ago      Running             kube-controller-manager      0                   69ef168f52131
	41a6909f99a59       24bc64e911039                                                                                                             20 minutes ago      Running             etcd                         0                   9b969e901cc05
	
	* 
	* ==> coredns [256eaaad3894] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	[INFO] Reloading complete
	[INFO] 127.0.0.1:55502 - 31535 "HINFO IN 8156761713541019547.3807690688336836625. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.006087175s
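The HINFO query for a random name above is most likely the loop plugin's self-probe (it resolves a nonsense name through the configured upstream to detect forwarding loops); NXDOMAIN is the healthy answer. One way to exercise the resolver, including the host.minikube.internal record injected earlier (pod name and image are illustrative):

    kubectl -n kube-system run dnsprobe --rm -it --restart=Never \
      --image=busybox:1.36 -- nslookup host.minikube.internal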
	
	* 
	* ==> describe nodes <==
	* Name:               addons-477000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-477000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627
	                    minikube.k8s.io/name=addons-477000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_15T09_33_47_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Jun 2023 16:33:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-477000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Jun 2023 16:53:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Jun 2023 16:50:10 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Jun 2023 16:50:10 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Jun 2023 16:50:10 +0000   Thu, 15 Jun 2023 16:33:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Jun 2023 16:50:10 +0000   Thu, 15 Jun 2023 16:33:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-477000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 5009a87f17804889a5a4616073b937e0
	  System UUID:                5009a87f17804889a5a4616073b937e0
	  Boot ID:                    9630f686-3c90-436f-98e6-d8c6686f510a
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-58478865f7-2pgxv                0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5d78c9869d-mds5s                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     19m
	  kube-system                 etcd-addons-477000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         20m
	  kube-system                 kube-apiserver-addons-477000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-addons-477000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-8rgcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-addons-477000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 snapshot-controller-75bbb956b9-p6hk4     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 snapshot-controller-75bbb956b9-prqv8     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node addons-477000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node addons-477000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node addons-477000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                20m                kubelet          Node addons-477000 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node addons-477000 event: Registered Node addons-477000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000000] KASLR disabled due to lack of seed
	[  +0.687452] EINJ: EINJ table not found.
	[  +0.627011] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043411] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000812] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.868427] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.067044] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.422105] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.174556] systemd-fstab-generator[729]: Ignoring "noauto" for root device
	[  +0.069729] systemd-fstab-generator[740]: Ignoring "noauto" for root device
	[  +0.066761] systemd-fstab-generator[753]: Ignoring "noauto" for root device
	[  +1.220689] systemd-fstab-generator[910]: Ignoring "noauto" for root device
	[  +0.067164] systemd-fstab-generator[921]: Ignoring "noauto" for root device
	[  +0.058616] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.062347] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +0.069889] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +2.576383] systemd-fstab-generator[1084]: Ignoring "noauto" for root device
	[  +1.530737] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.580821] systemd-fstab-generator[1404]: Ignoring "noauto" for root device
	[  +5.139726] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[Jun15 16:34] kauditd_printk_skb: 41 callbacks suppressed
	[  +2.392909] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +4.125194] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.280200] kauditd_printk_skb: 10 callbacks suppressed
	[  +5.114632] kauditd_printk_skb: 3 callbacks suppressed
	
	* 
	* ==> etcd [41a6909f99a5] <==
	* {"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-477000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.362Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T16:33:43.363Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T16:33:43.373Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-06-15T16:43:43.957Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":747}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":747,"took":"2.441695ms","hash":524925281}
	{"level":"info","ts":"2023-06-15T16:43:43.961Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":524925281,"revision":747,"compact-revision":-1}
	{"level":"info","ts":"2023-06-15T16:48:43.971Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":897}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":897,"took":"1.284283ms","hash":2514030906}
	{"level":"info","ts":"2023-06-15T16:48:43.973Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2514030906,"revision":897,"compact-revision":747}
	{"level":"info","ts":"2023-06-15T16:53:43.979Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1048}
	{"level":"info","ts":"2023-06-15T16:53:43.981Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1048,"took":"857.726µs","hash":834622362}
	{"level":"info","ts":"2023-06-15T16:53:43.981Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":834622362,"revision":1048,"compact-revision":897}
	
	* 
	* ==> gcp-auth [6a4bcd8ac64f] <==
	* 2023/06/15 16:34:18 GCP Auth Webhook started!
	
	* 
	* ==> kernel <==
	*  16:53:52 up 20 min,  0 users,  load average: 0.26, 0.49, 0.40
	Linux addons-477000 5.10.57 #1 SMP PREEMPT Wed Jun 14 05:08:37 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [b11fb0f32564] <==
	* I0615 16:34:01.682170       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:34:01.699943       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:34:01.700307       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:34:01.707459       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:34:01.707488       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:34:07.699142       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs=map[IPv4:10.109.79.243]
	I0615 16:34:07.712516       1 controller.go:624] quota admission added evaluator for: jobs.batch
	I0615 16:38:44.748912       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:38:44.749316       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:38:44.757965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:38:44.758323       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.761094       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.761179       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.769254       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.769477       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:43:44.769594       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:43:44.769676       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:48:44.750393       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:48:44.750930       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:48:44.765734       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:48:44.766097       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:53:44.751144       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:53:44.751426       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0615 16:53:44.766204       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0615 16:53:44.766395       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [66de98cb24ea] <==
	* I0615 16:34:13.731338       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:13.817171       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.755476       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:14.766120       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.820829       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.823621       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.825690       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0615 16:34:14.825754       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.826668       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:14.850370       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.758931       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.761497       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.764164       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0615 16:34:15.764226       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.766220       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:15.768259       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:29.767346       1 resource_quota_monitor.go:223] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0615 16:34:29.767460       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0615 16:34:29.868459       1 shared_informer.go:318] Caches are synced for resource quota
	I0615 16:34:30.190420       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0615 16:34:30.296099       1 shared_informer.go:318] Caches are synced for garbage collector
	I0615 16:34:44.034182       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:44.057184       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-create
	I0615 16:34:45.016712       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	I0615 16:34:45.039501       1 job_controller.go:523] enqueueing job gcp-auth/gcp-auth-certs-patch
	
	* 
	* ==> kube-proxy [29b72a92c657] <==
	* I0615 16:34:01.157223       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I0615 16:34:01.157274       1 server_others.go:110] "Detected node IP" address="192.168.105.2"
	I0615 16:34:01.157290       1 server_others.go:554] "Using iptables proxy"
	I0615 16:34:01.207136       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0615 16:34:01.207158       1 server_others.go:192] "Using iptables Proxier"
	I0615 16:34:01.207188       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0615 16:34:01.207493       1 server.go:658] "Version info" version="v1.27.3"
	I0615 16:34:01.207499       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0615 16:34:01.208029       1 config.go:188] "Starting service config controller"
	I0615 16:34:01.208049       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0615 16:34:01.208060       1 config.go:97] "Starting endpoint slice config controller"
	I0615 16:34:01.208062       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0615 16:34:01.209533       1 config.go:315] "Starting node config controller"
	I0615 16:34:01.209537       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0615 16:34:01.308743       1 shared_informer.go:318] Caches are synced for service config
	I0615 16:34:01.308782       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0615 16:34:01.309993       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [733213e41e3b] <==
	* W0615 16:33:44.753904       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:44.754011       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:44.754034       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:44.754072       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:44.754021       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:44.754081       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:44.754001       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0615 16:33:44.754100       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0615 16:33:44.754136       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0615 16:33:44.754145       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0615 16:33:45.605616       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0615 16:33:45.605673       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0615 16:33:45.647245       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 16:33:45.647292       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 16:33:45.699650       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0615 16:33:45.699699       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0615 16:33:45.702358       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0615 16:33:45.702403       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0615 16:33:45.718371       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 16:33:45.718408       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0615 16:33:45.723261       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 16:33:45.723281       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 16:33:45.755043       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0615 16:33:45.755066       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0615 16:33:46.350596       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-06-15 16:33:29 UTC, ends at Thu 2023-06-15 16:53:52 UTC. --
	Jun 15 16:48:47 addons-477000 kubelet[2256]: E0615 16:48:47.329501    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:48:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:48:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:48:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:49:47 addons-477000 kubelet[2256]: E0615 16:49:47.330743    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:49:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:49:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:49:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:50:47 addons-477000 kubelet[2256]: E0615 16:50:47.330157    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:50:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:50:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:50:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:51:47 addons-477000 kubelet[2256]: E0615 16:51:47.331536    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:51:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:51:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:51:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:52:47 addons-477000 kubelet[2256]: E0615 16:52:47.331030    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:52:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:52:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:52:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 16:53:47 addons-477000 kubelet[2256]: W0615 16:53:47.318167    2256 machine.go:65] Cannot read vendor id correctly, set empty.
	Jun 15 16:53:47 addons-477000 kubelet[2256]: E0615 16:53:47.330647    2256 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 16:53:47 addons-477000 kubelet[2256]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 16:53:47 addons-477000 kubelet[2256]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 16:53:47 addons-477000 kubelet[2256]:  > table=nat chain=KUBE-KUBELET-CANARY
	

                                                
                                                
-- /stdout --
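Note: the repeated kubelet "iptables canary" errors in the log above come from the guest kernel lacking the IPv6 nat table (ip6table_nat); they recur once a minute and are likely unrelated to the actual test failure. A manual check from inside the guest, sketched under the assumption that SSH into the addons-477000 VM works, would be:

    $ out/minikube-darwin-arm64 ssh -p addons-477000
    $ lsmod | grep ip6table_nat     # empty here, so the module is not loaded
    $ sudo modprobe ip6table_nat    # fails if the minikube ISO kernel omits the module
    $ sudo ip6tables -t nat -L      # lists the nat chains once the table exists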
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-477000 -n addons-477000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-477000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/CloudSpanner FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/CloudSpanner (832.32s)
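For a timeout like this, the useful follow-up is the addon's own workload rather than the control-plane logs. Hypothetical commands (the deployment name cloud-spanner-emulator is an assumption about the addon manifest, not taken from this log):

    $ kubectl --context addons-477000 get pods -A | grep -i spanner
    $ kubectl --context addons-477000 rollout status deployment/cloud-spanner-emulator --timeout=60s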

                                                
                                    
TestAddons/serial (0s)

                                                
                                                
=== RUN   TestAddons/serial
addons_test.go:138: Unable to run more tests (deadline exceeded)
--- FAIL: TestAddons/serial (0.00s)
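"Deadline exceeded" here most likely means the go test -timeout budget for the whole integration binary was already exhausted by the long-running parallel failures above, so this subtest fails instantly without running anything. A sketch of re-running just the addons tests with a larger budget (timeout value illustrative):

    $ go test -v -run TestAddons -timeout 120m ./test/integration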

                                                
                                    
TestAddons/StoppedEnableDisable (0s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-477000
addons_test.go:148: (dbg) Non-zero exit: out/minikube-darwin-arm64 stop -p addons-477000: context deadline exceeded (1.042µs)
addons_test.go:150: failed to stop minikube. args "out/minikube-darwin-arm64 stop -p addons-477000" : context deadline exceeded
addons_test.go:152: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-477000
addons_test.go:152: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-477000: context deadline exceeded (83ns)
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-darwin-arm64 addons enable dashboard -p addons-477000" : context deadline exceeded
addons_test.go:156: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-477000
addons_test.go:156: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-477000: context deadline exceeded (42ns)
addons_test.go:158: failed to disable dashboard addon: args "out/minikube-darwin-arm64 addons disable dashboard -p addons-477000" : context deadline exceeded
addons_test.go:161: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-477000
addons_test.go:161: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable gvisor -p addons-477000: context deadline exceeded (42ns)
addons_test.go:163: failed to disable non-enabled addon: args "out/minikube-darwin-arm64 addons disable gvisor -p addons-477000" : context deadline exceeded
--- FAIL: TestAddons/StoppedEnableDisable (0.00s)

                                                
                                    
TestCertOptions (10.36s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-118000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-118000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.0806335s)

                                                
                                                
-- stdout --
	* [cert-options-118000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-118000 in cluster cert-options-118000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-118000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-118000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-118000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-118000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-118000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (78.931625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-118000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-118000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-118000 config view
cert_options_test.go:93: Kubeconfig apiserver port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
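The empty kubeconfig (clusters: null) confirms the cluster never came up, so the port assertion cannot pass. On a successful start the profile's server entry would carry the custom --apiserver-port; an illustrative check (node IP hypothetical):

    $ kubectl --context cert-options-118000 config view -o jsonpath='{.clusters[0].cluster.server}'
    https://192.168.105.x:8555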
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-118000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-118000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (40.517833ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-118000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-118000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-118000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-06-15 10:26:33.248891 -0700 PDT m=+3247.586377418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-118000 -n cert-options-118000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-118000 -n cert-options-118000: exit status 7 (28.911708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-118000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-118000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-118000
--- FAIL: TestCertOptions (10.36s)
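Had the VM started, the SAN assertions at cert_options_test.go:69 would correspond to an openssl inspection like the following (a manual sketch; the certificate path is the one the test itself reads, the SAN listing shows what a passing run would contain):

    $ out/minikube-darwin-arm64 -p cert-options-118000 ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
            X509v3 Subject Alternative Name:
                DNS:localhost, DNS:www.google.com, IP Address:127.0.0.1, IP Address:192.168.15.15, ...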

                                                
                                    
TestCertExpiration (195.45s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.055674208s)

                                                
                                                
-- stdout --
	* [cert-expiration-744000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-744000 in cluster cert-expiration-744000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-744000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-744000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.224005583s)

                                                
                                                
-- stdout --
	* [cert-expiration-744000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-744000 in cluster cert-expiration-744000
	* Restarting existing qemu2 VM for "cert-expiration-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-744000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-744000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-744000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-744000 in cluster cert-expiration-744000
	* Restarting existing qemu2 VM for "cert-expiration-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-744000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-744000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-06-15 10:29:33.298791 -0700 PDT m=+3427.639192918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-744000 -n cert-expiration-744000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-744000 -n cert-expiration-744000: exit status 7 (72.847791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-744000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-744000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-744000
--- FAIL: TestCertExpiration (195.45s)
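Every qemu2 start in this run fails the same way: nothing is listening on /var/run/socket_vmnet. A host-side diagnosis sketch (the daemon path and --vmnet-gateway value mirror the settings visible in the logs, but the install layout on this agent is an assumption):

    $ ls -l /var/run/socket_vmnet                    # does the socket exist at all?
    $ sudo launchctl list | grep -i socket_vmnet     # is a launchd job for the daemon running?
    $ sudo /opt/socket_vmnet/bin/socket_vmnet \
          --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet   # start the daemon manually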

                                                
                                    
TestDockerFlags (10.37s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-374000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
E0615 10:26:17.267297    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
docker_test.go:45: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-374000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.128627208s)

                                                
                                                
-- stdout --
	* [docker-flags-374000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-374000 in cluster docker-flags-374000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-374000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:26:12.666095    4060 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:26:12.666233    4060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:26:12.666236    4060 out.go:309] Setting ErrFile to fd 2...
	I0615 10:26:12.666239    4060 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:26:12.666306    4060 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:26:12.667405    4060 out.go:303] Setting JSON to false
	I0615 10:26:12.682465    4060 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3343,"bootTime":1686846629,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:26:12.682535    4060 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:26:12.687652    4060 out.go:177] * [docker-flags-374000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:26:12.695557    4060 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:26:12.695607    4060 notify.go:220] Checking for updates...
	I0615 10:26:12.699514    4060 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:26:12.702627    4060 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:26:12.705558    4060 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:26:12.708570    4060 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:26:12.711573    4060 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:26:12.715215    4060 config.go:182] Loaded profile config "force-systemd-flag-974000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:26:12.715316    4060 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:26:12.715374    4060 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:26:12.723531    4060 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:26:12.726525    4060 start.go:297] selected driver: qemu2
	I0615 10:26:12.726530    4060 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:26:12.726536    4060 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:26:12.728498    4060 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:26:12.731587    4060 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:26:12.734664    4060 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0615 10:26:12.734689    4060 cni.go:84] Creating CNI manager for ""
	I0615 10:26:12.734695    4060 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:26:12.734699    4060 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:26:12.734705    4060 start_flags.go:319] config:
	{Name:docker-flags-374000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-374000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:26:12.734822    4060 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:26:12.738539    4060 out.go:177] * Starting control plane node docker-flags-374000 in cluster docker-flags-374000
	I0615 10:26:12.746596    4060 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:26:12.746620    4060 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:26:12.746644    4060 cache.go:57] Caching tarball of preloaded images
	I0615 10:26:12.746694    4060 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:26:12.746700    4060 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:26:12.746776    4060 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/docker-flags-374000/config.json ...
	I0615 10:26:12.746788    4060 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/docker-flags-374000/config.json: {Name:mkab4baea1a653ad42f98b6a65daed66bfa0c1a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:26:12.746985    4060 start.go:365] acquiring machines lock for docker-flags-374000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:26:12.747015    4060 start.go:369] acquired machines lock for "docker-flags-374000" in 24.542µs
	I0615 10:26:12.747027    4060 start.go:93] Provisioning new machine with config: &{Name:docker-flags-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-374000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:26:12.747051    4060 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:26:12.755525    4060 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0615 10:26:12.771695    4060 start.go:159] libmachine.API.Create for "docker-flags-374000" (driver="qemu2")
	I0615 10:26:12.771713    4060 client.go:168] LocalClient.Create starting
	I0615 10:26:12.771790    4060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:26:12.771808    4060 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:12.771821    4060 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:12.771867    4060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:26:12.771883    4060 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:12.771889    4060 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:12.772219    4060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:26:13.158739    4060 main.go:141] libmachine: Creating SSH key...
	I0615 10:26:13.228742    4060 main.go:141] libmachine: Creating Disk image...
	I0615 10:26:13.228748    4060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:26:13.228899    4060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2
	I0615 10:26:13.237387    4060 main.go:141] libmachine: STDOUT: 
	I0615 10:26:13.237400    4060 main.go:141] libmachine: STDERR: 
	I0615 10:26:13.237451    4060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2 +20000M
	I0615 10:26:13.244572    4060 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:26:13.244584    4060 main.go:141] libmachine: STDERR: 
	I0615 10:26:13.244622    4060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2
	I0615 10:26:13.244626    4060 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:26:13.244666    4060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:e6:f8:fe:b4:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2
	I0615 10:26:13.246135    4060 main.go:141] libmachine: STDOUT: 
	I0615 10:26:13.246148    4060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:26:13.246167    4060 client.go:171] LocalClient.Create took 474.457083ms
	I0615 10:26:15.248370    4060 start.go:128] duration metric: createHost completed in 2.501339917s
	I0615 10:26:15.248423    4060 start.go:83] releasing machines lock for "docker-flags-374000", held for 2.501440833s
	W0615 10:26:15.248482    4060 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:26:15.266471    4060 out.go:177] * Deleting "docker-flags-374000" in qemu2 ...
	W0615 10:26:15.281811    4060 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:26:15.281835    4060 start.go:687] Will try again in 5 seconds ...
	I0615 10:26:20.284028    4060 start.go:365] acquiring machines lock for docker-flags-374000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:26:20.491566    4060 start.go:369] acquired machines lock for "docker-flags-374000" in 207.411375ms
	I0615 10:26:20.491740    4060 start.go:93] Provisioning new machine with config: &{Name:docker-flags-374000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:docker-flags-374000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:26:20.492017    4060 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:26:20.497777    4060 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0615 10:26:20.543414    4060 start.go:159] libmachine.API.Create for "docker-flags-374000" (driver="qemu2")
	I0615 10:26:20.543460    4060 client.go:168] LocalClient.Create starting
	I0615 10:26:20.543583    4060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:26:20.543636    4060 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:20.543652    4060 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:20.543737    4060 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:26:20.543765    4060 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:20.543778    4060 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:20.544257    4060 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:26:20.659677    4060 main.go:141] libmachine: Creating SSH key...
	I0615 10:26:20.709892    4060 main.go:141] libmachine: Creating Disk image...
	I0615 10:26:20.709898    4060 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:26:20.710049    4060 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2
	I0615 10:26:20.718565    4060 main.go:141] libmachine: STDOUT: 
	I0615 10:26:20.718578    4060 main.go:141] libmachine: STDERR: 
	I0615 10:26:20.718633    4060 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2 +20000M
	I0615 10:26:20.725669    4060 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:26:20.725679    4060 main.go:141] libmachine: STDERR: 
	I0615 10:26:20.725689    4060 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2
	I0615 10:26:20.725706    4060 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:26:20.725746    4060 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:ca:f8:31:2a:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/docker-flags-374000/disk.qcow2
	I0615 10:26:20.727233    4060 main.go:141] libmachine: STDOUT: 
	I0615 10:26:20.727247    4060 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:26:20.727259    4060 client.go:171] LocalClient.Create took 183.7975ms
	I0615 10:26:22.729440    4060 start.go:128] duration metric: createHost completed in 2.237411709s
	I0615 10:26:22.729485    4060 start.go:83] releasing machines lock for "docker-flags-374000", held for 2.237925042s
	W0615 10:26:22.729895    4060 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-374000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-374000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:26:22.739346    4060 out.go:177] 
	W0615 10:26:22.743415    4060 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:26:22.743439    4060 out.go:239] * 
	* 
	W0615 10:26:22.746249    4060 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:26:22.754285    4060 out.go:177] 

** /stderr **
docker_test.go:47: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-374000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:50: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-374000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-374000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (77.966834ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-374000"

-- /stdout --
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-374000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-374000\"\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-374000\"\n"*.
docker_test.go:61: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-374000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-374000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (43.692292ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-374000"

-- /stdout --
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-374000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:67: expected "out/minikube-darwin-arm64 -p docker-flags-374000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-374000\"\n"
panic.go:522: *** TestDockerFlags FAILED at 2023-06-15 10:26:22.892173 -0700 PDT m=+3237.229491626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-374000 -n docker-flags-374000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-374000 -n docker-flags-374000: exit status 7 (27.822375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-374000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-374000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-374000
--- FAIL: TestDockerFlags (10.37s)
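
Editor's note: the TestDockerFlags failure above, like the other qemu2 start failures in this report, traces to a single root cause: socket_vmnet_client could not reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal sketch for checking the daemon on the CI host before re-running, using the paths shown in the log (these commands are illustrative, not part of the test suite):

	# Does the Unix socket exist at the path minikube is configured with?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon process actually running?
	pgrep -fl socket_vmnet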

TestForceSystemdFlag (10.83s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-974000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-974000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.631255792s)

-- stdout --
	* [force-systemd-flag-974000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-974000 in cluster force-systemd-flag-974000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-974000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:26:07.214792    4039 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:26:07.214925    4039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:26:07.214927    4039 out.go:309] Setting ErrFile to fd 2...
	I0615 10:26:07.214930    4039 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:26:07.214995    4039 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:26:07.216008    4039 out.go:303] Setting JSON to false
	I0615 10:26:07.231155    4039 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3338,"bootTime":1686846629,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:26:07.231211    4039 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:26:07.235936    4039 out.go:177] * [force-systemd-flag-974000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:26:07.243051    4039 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:26:07.247894    4039 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:26:07.243059    4039 notify.go:220] Checking for updates...
	I0615 10:26:07.254930    4039 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:26:07.257917    4039 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:26:07.260969    4039 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:26:07.263904    4039 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:26:07.267217    4039 config.go:182] Loaded profile config "force-systemd-env-276000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:26:07.267287    4039 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:26:07.267323    4039 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:26:07.271933    4039 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:26:07.278934    4039 start.go:297] selected driver: qemu2
	I0615 10:26:07.278944    4039 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:26:07.278957    4039 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:26:07.280713    4039 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:26:07.283947    4039 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:26:07.285447    4039 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0615 10:26:07.285467    4039 cni.go:84] Creating CNI manager for ""
	I0615 10:26:07.285474    4039 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:26:07.285478    4039 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:26:07.285486    4039 start_flags.go:319] config:
	{Name:force-systemd-flag-974000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-974000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:26:07.285583    4039 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:26:07.288950    4039 out.go:177] * Starting control plane node force-systemd-flag-974000 in cluster force-systemd-flag-974000
	I0615 10:26:07.296887    4039 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:26:07.296911    4039 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:26:07.296920    4039 cache.go:57] Caching tarball of preloaded images
	I0615 10:26:07.296968    4039 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:26:07.296973    4039 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:26:07.297019    4039 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/force-systemd-flag-974000/config.json ...
	I0615 10:26:07.297029    4039 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/force-systemd-flag-974000/config.json: {Name:mk9f827c787a14283bf1b758ac35a2c902f56c76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:26:07.297226    4039 start.go:365] acquiring machines lock for force-systemd-flag-974000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:26:07.297254    4039 start.go:369] acquired machines lock for "force-systemd-flag-974000" in 22.583µs
	I0615 10:26:07.297266    4039 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-974000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-974000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:26:07.297292    4039 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:26:07.307930    4039 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0615 10:26:07.323787    4039 start.go:159] libmachine.API.Create for "force-systemd-flag-974000" (driver="qemu2")
	I0615 10:26:07.323818    4039 client.go:168] LocalClient.Create starting
	I0615 10:26:07.323871    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:26:07.323890    4039 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:07.323900    4039 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:07.323957    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:26:07.323972    4039 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:07.323980    4039 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:07.324302    4039 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:26:07.514596    4039 main.go:141] libmachine: Creating SSH key...
	I0615 10:26:07.577254    4039 main.go:141] libmachine: Creating Disk image...
	I0615 10:26:07.577260    4039 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:26:07.577393    4039 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2
	I0615 10:26:07.585949    4039 main.go:141] libmachine: STDOUT: 
	I0615 10:26:07.585973    4039 main.go:141] libmachine: STDERR: 
	I0615 10:26:07.586031    4039 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2 +20000M
	I0615 10:26:07.593122    4039 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:26:07.593135    4039 main.go:141] libmachine: STDERR: 
	I0615 10:26:07.593158    4039 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2
	I0615 10:26:07.593164    4039 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:26:07.593201    4039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:2d:0a:b6:10:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2
	I0615 10:26:07.594675    4039 main.go:141] libmachine: STDOUT: 
	I0615 10:26:07.594687    4039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:26:07.594707    4039 client.go:171] LocalClient.Create took 270.889125ms
	I0615 10:26:09.596932    4039 start.go:128] duration metric: createHost completed in 2.29965925s
	I0615 10:26:09.596993    4039 start.go:83] releasing machines lock for "force-systemd-flag-974000", held for 2.299766875s
	W0615 10:26:09.597081    4039 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:26:09.609322    4039 out.go:177] * Deleting "force-systemd-flag-974000" in qemu2 ...
	W0615 10:26:09.629035    4039 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:26:09.629058    4039 start.go:687] Will try again in 5 seconds ...
	I0615 10:26:14.631192    4039 start.go:365] acquiring machines lock for force-systemd-flag-974000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:26:15.248546    4039 start.go:369] acquired machines lock for "force-systemd-flag-974000" in 617.219542ms
	I0615 10:26:15.248756    4039 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-974000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-flag-974000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:26:15.249062    4039 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:26:15.257525    4039 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0615 10:26:15.303644    4039 start.go:159] libmachine.API.Create for "force-systemd-flag-974000" (driver="qemu2")
	I0615 10:26:15.303687    4039 client.go:168] LocalClient.Create starting
	I0615 10:26:15.303846    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:26:15.303894    4039 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:15.303921    4039 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:15.304028    4039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:26:15.304067    4039 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:15.304093    4039 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:15.304767    4039 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:26:15.669311    4039 main.go:141] libmachine: Creating SSH key...
	I0615 10:26:15.761122    4039 main.go:141] libmachine: Creating Disk image...
	I0615 10:26:15.761129    4039 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:26:15.761282    4039 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2
	I0615 10:26:15.769837    4039 main.go:141] libmachine: STDOUT: 
	I0615 10:26:15.769851    4039 main.go:141] libmachine: STDERR: 
	I0615 10:26:15.769912    4039 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2 +20000M
	I0615 10:26:15.777064    4039 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:26:15.777082    4039 main.go:141] libmachine: STDERR: 
	I0615 10:26:15.777094    4039 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2
	I0615 10:26:15.777099    4039 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:26:15.777140    4039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:0b:05:2a:3e:6e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-flag-974000/disk.qcow2
	I0615 10:26:15.778699    4039 main.go:141] libmachine: STDOUT: 
	I0615 10:26:15.778712    4039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:26:15.778724    4039 client.go:171] LocalClient.Create took 475.039667ms
	I0615 10:26:17.780871    4039 start.go:128] duration metric: createHost completed in 2.531826708s
	I0615 10:26:17.780940    4039 start.go:83] releasing machines lock for "force-systemd-flag-974000", held for 2.532401458s
	W0615 10:26:17.781389    4039 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-974000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-974000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:26:17.790034    4039 out.go:177] 
	W0615 10:26:17.794091    4039 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:26:17.794149    4039 out.go:239] * 
	* 
	W0615 10:26:17.796791    4039 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:26:17.806032    4039 out.go:177] 

** /stderr **
docker_test.go:87: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-974000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-974000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-974000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (74.530917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-974000"

-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-974000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2023-06-15 10:26:17.896935 -0700 PDT m=+3232.234172293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-974000 -n force-systemd-flag-974000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-974000 -n force-systemd-flag-974000: exit status 7 (33.500208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-974000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-974000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-974000
--- FAIL: TestForceSystemdFlag (10.83s)
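
Editor's note: TestForceSystemdFlag fails the same way: both create attempts reach "Starting QEMU VM..." and then die on the socket_vmnet connect. If the daemon is simply not running, a hedged sketch for bringing it up, assuming a Homebrew install and the /opt/socket_vmnet paths from the log (the service name and gateway address follow the upstream socket_vmnet docs and are assumptions, not taken from this report):

	# Start the daemon as a launchd service (Homebrew install).
	sudo brew services start socket_vmnet
	# Or run it in the foreground to watch for errors directly.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet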

TestForceSystemdEnv (10.34s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-276000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-276000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.133914292s)

-- stdout --
	* [force-systemd-env-276000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-276000 in cluster force-systemd-env-276000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-276000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:26:02.328538    4007 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:26:02.328688    4007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:26:02.328691    4007 out.go:309] Setting ErrFile to fd 2...
	I0615 10:26:02.328694    4007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:26:02.328771    4007 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:26:02.329825    4007 out.go:303] Setting JSON to false
	I0615 10:26:02.345009    4007 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3333,"bootTime":1686846629,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:26:02.345077    4007 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:26:02.350576    4007 out.go:177] * [force-systemd-env-276000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:26:02.358618    4007 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:26:02.362606    4007 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:26:02.358672    4007 notify.go:220] Checking for updates...
	I0615 10:26:02.368634    4007 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:26:02.371606    4007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:26:02.374591    4007 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:26:02.377510    4007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0615 10:26:02.380922    4007 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:26:02.380968    4007 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:26:02.385575    4007 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:26:02.392515    4007 start.go:297] selected driver: qemu2
	I0615 10:26:02.392518    4007 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:26:02.392527    4007 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:26:02.394461    4007 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:26:02.397564    4007 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:26:02.400660    4007 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0615 10:26:02.400686    4007 cni.go:84] Creating CNI manager for ""
	I0615 10:26:02.400692    4007 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:26:02.400696    4007 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:26:02.400702    4007 start_flags.go:319] config:
	{Name:force-systemd-env-276000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-276000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:26:02.400794    4007 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:26:02.408495    4007 out.go:177] * Starting control plane node force-systemd-env-276000 in cluster force-systemd-env-276000
	I0615 10:26:02.412568    4007 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:26:02.412592    4007 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:26:02.412608    4007 cache.go:57] Caching tarball of preloaded images
	I0615 10:26:02.412663    4007 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:26:02.412668    4007 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:26:02.412728    4007 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/force-systemd-env-276000/config.json ...
	I0615 10:26:02.412740    4007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/force-systemd-env-276000/config.json: {Name:mkd2302be91de43f199dbc405ad927c1bc9fddd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:26:02.412949    4007 start.go:365] acquiring machines lock for force-systemd-env-276000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:26:02.412984    4007 start.go:369] acquired machines lock for "force-systemd-env-276000" in 24.792µs
	I0615 10:26:02.412997    4007 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-276000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-276000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:26:02.413027    4007 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:26:02.421515    4007 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0615 10:26:02.438729    4007 start.go:159] libmachine.API.Create for "force-systemd-env-276000" (driver="qemu2")
	I0615 10:26:02.438753    4007 client.go:168] LocalClient.Create starting
	I0615 10:26:02.438825    4007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:26:02.438847    4007 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:02.438856    4007 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:02.438902    4007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:26:02.438917    4007 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:02.438927    4007 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:02.439276    4007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:26:02.790752    4007 main.go:141] libmachine: Creating SSH key...
	I0615 10:26:02.949216    4007 main.go:141] libmachine: Creating Disk image...
	I0615 10:26:02.949228    4007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:26:02.949369    4007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2
	I0615 10:26:02.958041    4007 main.go:141] libmachine: STDOUT: 
	I0615 10:26:02.958056    4007 main.go:141] libmachine: STDERR: 
	I0615 10:26:02.958109    4007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2 +20000M
	I0615 10:26:02.965539    4007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:26:02.965561    4007 main.go:141] libmachine: STDERR: 
	I0615 10:26:02.965584    4007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2
	I0615 10:26:02.965589    4007 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:26:02.965631    4007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:85:5a:74:3a:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2
	I0615 10:26:02.967176    4007 main.go:141] libmachine: STDOUT: 
	I0615 10:26:02.967190    4007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:26:02.967210    4007 client.go:171] LocalClient.Create took 528.460375ms
	I0615 10:26:04.969396    4007 start.go:128] duration metric: createHost completed in 2.556348417s
	I0615 10:26:04.969472    4007 start.go:83] releasing machines lock for "force-systemd-env-276000", held for 2.55651875s
	W0615 10:26:04.969559    4007 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:26:04.978645    4007 out.go:177] * Deleting "force-systemd-env-276000" in qemu2 ...
	W0615 10:26:04.999345    4007 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:26:04.999375    4007 start.go:687] Will try again in 5 seconds ...
	I0615 10:26:10.001429    4007 start.go:365] acquiring machines lock for force-systemd-env-276000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:26:10.002046    4007 start.go:369] acquired machines lock for "force-systemd-env-276000" in 501.5µs
	I0615 10:26:10.002195    4007 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-276000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:force-systemd-env-276000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:26:10.002523    4007 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:26:10.008059    4007 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0615 10:26:10.057591    4007 start.go:159] libmachine.API.Create for "force-systemd-env-276000" (driver="qemu2")
	I0615 10:26:10.057645    4007 client.go:168] LocalClient.Create starting
	I0615 10:26:10.057802    4007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:26:10.057855    4007 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:10.057880    4007 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:10.057973    4007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:26:10.058006    4007 main.go:141] libmachine: Decoding PEM data...
	I0615 10:26:10.058029    4007 main.go:141] libmachine: Parsing certificate...
	I0615 10:26:10.058765    4007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:26:10.273888    4007 main.go:141] libmachine: Creating SSH key...
	I0615 10:26:10.378537    4007 main.go:141] libmachine: Creating Disk image...
	I0615 10:26:10.378546    4007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:26:10.378689    4007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2
	I0615 10:26:10.387075    4007 main.go:141] libmachine: STDOUT: 
	I0615 10:26:10.387088    4007 main.go:141] libmachine: STDERR: 
	I0615 10:26:10.387168    4007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2 +20000M
	I0615 10:26:10.394238    4007 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:26:10.394250    4007 main.go:141] libmachine: STDERR: 
	I0615 10:26:10.394270    4007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2
	I0615 10:26:10.394275    4007 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:26:10.394323    4007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:0a:70:75:fc:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/force-systemd-env-276000/disk.qcow2
	I0615 10:26:10.395844    4007 main.go:141] libmachine: STDOUT: 
	I0615 10:26:10.395857    4007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:26:10.395874    4007 client.go:171] LocalClient.Create took 338.223625ms
	I0615 10:26:12.398095    4007 start.go:128] duration metric: createHost completed in 2.395588458s
	I0615 10:26:12.398147    4007 start.go:83] releasing machines lock for "force-systemd-env-276000", held for 2.396103959s
	W0615 10:26:12.398544    4007 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-276000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-276000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:26:12.406007    4007 out.go:177] 
	W0615 10:26:12.409981    4007 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:26:12.410012    4007 out.go:239] * 
	* 
	W0615 10:26:12.412629    4007 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:26:12.419979    4007 out.go:177] 

** /stderr **
docker_test.go:151: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-276000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-276000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-276000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (75.983583ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-276000"

-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-276000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:160: *** TestForceSystemdEnv FAILED at 2023-06-15 10:26:12.512588 -0700 PDT m=+3226.849738209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-276000 -n force-systemd-env-276000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-276000 -n force-systemd-env-276000: exit status 7 (33.639167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-276000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-276000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-276000
--- FAIL: TestForceSystemdEnv (10.34s)
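Editor's note: the create flow above succeeds right up to "Starting QEMU VM...": the disk image is converted and resized, but socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never handed its network fd (-netdev socket,id=net0,fd=3) and host creation aborts. A minimal manual check on the CI host, assuming the /opt/socket_vmnet layout shown in the log (the gateway address below is illustrative, chosen to match the 192.168.105.x subnet seen elsewhere in this report):

	# Does the unix socket exist?
	ls -l /var/run/socket_vmnet
	# Is the daemon loaded? (socket_vmnet is normally run as root via launchd)
	sudo launchctl list | grep -i socket_vmnet
	# If not, start it by hand for debugging (illustrative invocation):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet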

TestFunctional/parallel/ServiceCmdConnect (34.5s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-822000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-822000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-69rxt" [2df51cd2-c003-4e7a-aee9-ae1934b81b32] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-69rxt" [2df51cd2-c003-4e7a-aee9-ae1934b81b32] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.011940291s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:32239
functional_test.go:1660: error fetching http://192.168.105.4:32239: Get "http://192.168.105.4:32239": dial tcp 192.168.105.4:32239: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32239: Get "http://192.168.105.4:32239": dial tcp 192.168.105.4:32239: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32239: Get "http://192.168.105.4:32239": dial tcp 192.168.105.4:32239: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32239: Get "http://192.168.105.4:32239": dial tcp 192.168.105.4:32239: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32239: Get "http://192.168.105.4:32239": dial tcp 192.168.105.4:32239: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32239: Get "http://192.168.105.4:32239": dial tcp 192.168.105.4:32239: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32239: Get "http://192.168.105.4:32239": dial tcp 192.168.105.4:32239: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:32239: Get "http://192.168.105.4:32239": dial tcp 192.168.105.4:32239: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-822000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-58d66798bb-69rxt
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-822000/192.168.105.4
Start Time:       Thu, 15 Jun 2023 10:16:28 -0700
Labels:           app=hello-node-connect
pod-template-hash=58d66798bb
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-58d66798bb
Containers:
echoserver-arm:
Container ID:   docker://b47a7bf7af32f2c05be43a14b7c475b9fbe761061001bc0482a4f3322360cfb4
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Thu, 15 Jun 2023 10:16:48 -0700
Finished:     Thu, 15 Jun 2023 10:16:48 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x5lsp (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-x5lsp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  34s                default-scheduler  Successfully assigned default/hello-node-connect-58d66798bb-69rxt to functional-822000
Normal   Pulling    33s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     29s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.41841637s (4.418444369s including waiting)
Normal   Created    14s (x3 over 29s)  kubelet            Created container echoserver-arm
Normal   Started    14s (x3 over 28s)  kubelet            Started container echoserver-arm
Normal   Pulled     14s (x2 over 28s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    1s (x4 over 27s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-58d66798bb-69rxt_default(2df51cd2-c003-4e7a-aee9-ae1934b81b32)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-822000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
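Editor's note: "exec format error" in the pod log means the kernel on the arm64 node could not execute the container's /usr/sbin/nginx binary, i.e. the echoserver-arm:1.8 image ships a binary built for a different architecture. That is why the container crash-loops and the service below ends up with no endpoints. Illustrative commands (not part of the test run) to confirm the image's platform:

	# What platform(s) does the registry manifest advertise?
	docker manifest inspect registry.k8s.io/echoserver-arm:1.8
	# What was actually pulled onto the node?
	docker image inspect registry.k8s.io/echoserver-arm:1.8 --format '{{.Os}}/{{.Architecture}}'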
functional_test.go:1613: (dbg) Run:  kubectl --context functional-822000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.130.152
IPs:                      10.104.130.152
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32239/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
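Editor's note: the empty "Endpoints:" field above is the direct cause of the earlier dial errors: the lone pod never becomes Ready (CrashLoopBackOff), so the service has no endpoints and connections to NodePort 32239 are refused. Illustrative commands (not from the report) to observe this from the test context:

	kubectl --context functional-822000 get endpoints hello-node-connect
	kubectl --context functional-822000 get pods -l app=hello-node-connect -o wide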
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-822000 -n functional-822000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                  Args                                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-822000                                                                                      | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:15 PDT | 15 Jun 23 10:15 PDT |
	|         | ssh sudo docker rmi                                                                                    |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-822000 ssh                                                                                  | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:15 PDT |                     |
	|         | sudo crictl inspecti                                                                                   |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                           |                   |         |         |                     |                     |
	| cache   | functional-822000 cache reload                                                                         | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:15 PDT | 15 Jun 23 10:15 PDT |
	| ssh     | functional-822000 ssh                                                                                  | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:15 PDT | 15 Jun 23 10:15 PDT |
	|         | sudo crictl inspecti                                                                                   |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                                                           |                   |         |         |                     |                     |
	| cache   | delete                                                                                                 | minikube          | jenkins | v1.30.1 | 15 Jun 23 10:15 PDT | 15 Jun 23 10:15 PDT |
	|         | registry.k8s.io/pause:3.1                                                                              |                   |         |         |                     |                     |
	| cache   | delete                                                                                                 | minikube          | jenkins | v1.30.1 | 15 Jun 23 10:15 PDT | 15 Jun 23 10:15 PDT |
	|         | registry.k8s.io/pause:latest                                                                           |                   |         |         |                     |                     |
	| kubectl | functional-822000 kubectl --                                                                           | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:15 PDT | 15 Jun 23 10:15 PDT |
	|         | --context functional-822000                                                                            |                   |         |         |                     |                     |
	|         | get pods                                                                                               |                   |         |         |                     |                     |
	| start   | -p functional-822000                                                                                   | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:15 PDT | 15 Jun 23 10:16 PDT |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision                               |                   |         |         |                     |                     |
	|         | --wait=all                                                                                             |                   |         |         |                     |                     |
	| service | invalid-svc -p                                                                                         | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT |                     |
	|         | functional-822000                                                                                      |                   |         |         |                     |                     |
	| config  | functional-822000 config unset                                                                         | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT | 15 Jun 23 10:16 PDT |
	|         | cpus                                                                                                   |                   |         |         |                     |                     |
	| cp      | functional-822000 cp                                                                                   | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT | 15 Jun 23 10:16 PDT |
	|         | testdata/cp-test.txt                                                                                   |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                               |                   |         |         |                     |                     |
	| config  | functional-822000 config get                                                                           | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT |                     |
	|         | cpus                                                                                                   |                   |         |         |                     |                     |
	| config  | functional-822000 config unset                                                                         | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT | 15 Jun 23 10:16 PDT |
	|         | cpus                                                                                                   |                   |         |         |                     |                     |
	| cp      | functional-822000 cp functional-822000:/home/docker/cp-test.txt                                        | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT | 15 Jun 23 10:16 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2988643683/001/cp-test.txt |                   |         |         |                     |                     |
	| config  | functional-822000 config get                                                                           | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT |                     |
	|         | cpus                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-822000 ssh echo                                                                             | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT | 15 Jun 23 10:16 PDT |
	|         | hello                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-822000 ssh -n                                                                               | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT | 15 Jun 23 10:16 PDT |
	|         | functional-822000 sudo cat                                                                             |                   |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-822000 ssh cat                                                                              | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT | 15 Jun 23 10:16 PDT |
	|         | /etc/hostname                                                                                          |                   |         |         |                     |                     |
	| tunnel  | functional-822000 tunnel                                                                               | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| tunnel  | functional-822000 tunnel                                                                               | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| tunnel  | functional-822000 tunnel                                                                               | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT |                     |
	|         | --alsologtostderr                                                                                      |                   |         |         |                     |                     |
	| addons  | functional-822000 addons list                                                                          | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT | 15 Jun 23 10:16 PDT |
	| addons  | functional-822000 addons list                                                                          | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT | 15 Jun 23 10:16 PDT |
	|         | -o json                                                                                                |                   |         |         |                     |                     |
	| service | functional-822000 service                                                                              | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT | 15 Jun 23 10:16 PDT |
	|         | hello-node-connect --url                                                                               |                   |         |         |                     |                     |
	| service | functional-822000 service list                                                                         | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:16 PDT | 15 Jun 23 10:16 PDT |
	|---------|--------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 10:15:34
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 10:15:34.422033    2743 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:15:34.422145    2743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:15:34.422146    2743 out.go:309] Setting ErrFile to fd 2...
	I0615 10:15:34.422148    2743 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:15:34.422209    2743 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:15:34.423215    2743 out.go:303] Setting JSON to false
	I0615 10:15:34.438943    2743 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2705,"bootTime":1686846629,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:15:34.439014    2743 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:15:34.443174    2743 out.go:177] * [functional-822000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:15:34.450399    2743 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:15:34.454250    2743 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:15:34.450480    2743 notify.go:220] Checking for updates...
	I0615 10:15:34.460307    2743 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:15:34.463227    2743 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:15:34.466320    2743 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:15:34.469336    2743 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:15:34.470966    2743 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:15:34.471012    2743 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:15:34.475298    2743 out.go:177] * Using the qemu2 driver based on existing profile
	I0615 10:15:34.482172    2743 start.go:297] selected driver: qemu2
	I0615 10:15:34.482175    2743 start.go:884] validating driver "qemu2" against &{Name:functional-822000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.3 ClusterName:functional-822000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:15:34.482238    2743 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:15:34.484057    2743 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:15:34.484077    2743 cni.go:84] Creating CNI manager for ""
	I0615 10:15:34.484082    2743 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:15:34.484087    2743 start_flags.go:319] config:
	{Name:functional-822000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-822000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:15:34.484429    2743 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:15:34.488351    2743 out.go:177] * Starting control plane node functional-822000 in cluster functional-822000
	I0615 10:15:34.496283    2743 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:15:34.496306    2743 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:15:34.496316    2743 cache.go:57] Caching tarball of preloaded images
	I0615 10:15:34.496371    2743 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:15:34.496375    2743 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:15:34.496435    2743 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/config.json ...
	I0615 10:15:34.496780    2743 start.go:365] acquiring machines lock for functional-822000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:15:34.496808    2743 start.go:369] acquired machines lock for "functional-822000" in 23.708µs
	I0615 10:15:34.496815    2743 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:15:34.496817    2743 fix.go:54] fixHost starting: 
	I0615 10:15:34.497432    2743 fix.go:102] recreateIfNeeded on functional-822000: state=Running err=<nil>
	W0615 10:15:34.497438    2743 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:15:34.502332    2743 out.go:177] * Updating the running qemu2 "functional-822000" VM ...
	I0615 10:15:34.510309    2743 machine.go:88] provisioning docker machine ...
	I0615 10:15:34.510322    2743 buildroot.go:166] provisioning hostname "functional-822000"
	I0615 10:15:34.510384    2743 main.go:141] libmachine: Using SSH client type: native
	I0615 10:15:34.510680    2743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104784e20] 0x104787880 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0615 10:15:34.510684    2743 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-822000 && echo "functional-822000" | sudo tee /etc/hostname
	I0615 10:15:34.575905    2743 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-822000
	
	I0615 10:15:34.575932    2743 main.go:141] libmachine: Using SSH client type: native
	I0615 10:15:34.576155    2743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104784e20] 0x104787880 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0615 10:15:34.576162    2743 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-822000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-822000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-822000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0615 10:15:34.636896    2743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0615 10:15:34.636902    2743 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16718-868/.minikube CaCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16718-868/.minikube}
	I0615 10:15:34.636908    2743 buildroot.go:174] setting up certificates
	I0615 10:15:34.636913    2743 provision.go:83] configureAuth start
	I0615 10:15:34.636917    2743 provision.go:138] copyHostCerts
	I0615 10:15:34.636982    2743 exec_runner.go:144] found /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem, removing ...
	I0615 10:15:34.636985    2743 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem
	I0615 10:15:34.637076    2743 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem (1679 bytes)
	I0615 10:15:34.637258    2743 exec_runner.go:144] found /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem, removing ...
	I0615 10:15:34.637259    2743 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem
	I0615 10:15:34.637315    2743 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem (1078 bytes)
	I0615 10:15:34.637413    2743 exec_runner.go:144] found /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem, removing ...
	I0615 10:15:34.637417    2743 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem
	I0615 10:15:34.637451    2743 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem (1123 bytes)
	I0615 10:15:34.637521    2743 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem org=jenkins.functional-822000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-822000]
	I0615 10:15:34.883575    2743 provision.go:172] copyRemoteCerts
	I0615 10:15:34.883629    2743 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0615 10:15:34.883637    2743 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/id_rsa Username:docker}
	I0615 10:15:34.914310    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0615 10:15:34.921587    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0615 10:15:34.928519    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0615 10:15:34.935660    2743 provision.go:86] duration metric: configureAuth took 298.743833ms
	I0615 10:15:34.935672    2743 buildroot.go:189] setting minikube options for container-runtime
	I0615 10:15:34.935790    2743 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:15:34.935834    2743 main.go:141] libmachine: Using SSH client type: native
	I0615 10:15:34.936050    2743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104784e20] 0x104787880 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0615 10:15:34.936053    2743 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0615 10:15:34.991897    2743 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0615 10:15:34.991904    2743 buildroot.go:70] root file system type: tmpfs
	I0615 10:15:34.992165    2743 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0615 10:15:34.992218    2743 main.go:141] libmachine: Using SSH client type: native
	I0615 10:15:34.992468    2743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104784e20] 0x104787880 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0615 10:15:34.992499    2743 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0615 10:15:35.054984    2743 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0615 10:15:35.055034    2743 main.go:141] libmachine: Using SSH client type: native
	I0615 10:15:35.055267    2743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104784e20] 0x104787880 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0615 10:15:35.055274    2743 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0615 10:15:35.113574    2743 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0615 10:15:35.113580    2743 machine.go:91] provisioned docker machine in 603.268417ms
	I0615 10:15:35.113583    2743 start.go:300] post-start starting for "functional-822000" (driver="qemu2")
	I0615 10:15:35.113588    2743 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0615 10:15:35.113634    2743 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0615 10:15:35.113640    2743 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/id_rsa Username:docker}
	I0615 10:15:35.145895    2743 ssh_runner.go:195] Run: cat /etc/os-release
	I0615 10:15:35.147550    2743 info.go:137] Remote host: Buildroot 2021.02.12
	I0615 10:15:35.147554    2743 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/addons for local assets ...
	I0615 10:15:35.147612    2743 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/files for local assets ...
	I0615 10:15:35.147723    2743 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem -> 13132.pem in /etc/ssl/certs
	I0615 10:15:35.147824    2743 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/test/nested/copy/1313/hosts -> hosts in /etc/test/nested/copy/1313
	I0615 10:15:35.147851    2743 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1313
	I0615 10:15:35.157720    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem --> /etc/ssl/certs/13132.pem (1708 bytes)
	I0615 10:15:35.164607    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/test/nested/copy/1313/hosts --> /etc/test/nested/copy/1313/hosts (40 bytes)
	I0615 10:15:35.171646    2743 start.go:303] post-start completed in 58.057334ms
	I0615 10:15:35.171651    2743 fix.go:56] fixHost completed within 674.834542ms
	I0615 10:15:35.171723    2743 main.go:141] libmachine: Using SSH client type: native
	I0615 10:15:35.171970    2743 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104784e20] 0x104787880 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0615 10:15:35.171973    2743 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0615 10:15:35.226912    2743 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686849335.208326508
	
	I0615 10:15:35.226917    2743 fix.go:206] guest clock: 1686849335.208326508
	I0615 10:15:35.226920    2743 fix.go:219] Guest: 2023-06-15 10:15:35.208326508 -0700 PDT Remote: 2023-06-15 10:15:35.171651 -0700 PDT m=+0.768285876 (delta=36.675508ms)
	I0615 10:15:35.226929    2743 fix.go:190] guest clock delta is within tolerance: 36.675508ms
	I0615 10:15:35.226931    2743 start.go:83] releasing machines lock for "functional-822000", held for 730.121416ms
	I0615 10:15:35.227218    2743 ssh_runner.go:195] Run: cat /version.json
	I0615 10:15:35.227224    2743 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/id_rsa Username:docker}
	I0615 10:15:35.227242    2743 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0615 10:15:35.227260    2743 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/id_rsa Username:docker}
	I0615 10:15:35.298927    2743 ssh_runner.go:195] Run: systemctl --version
	I0615 10:15:35.300805    2743 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0615 10:15:35.302600    2743 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0615 10:15:35.302620    2743 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0615 10:15:35.305324    2743 cni.go:265] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0615 10:15:35.305328    2743 start.go:466] detecting cgroup driver to use...
	I0615 10:15:35.305387    2743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 10:15:35.310572    2743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0615 10:15:35.313276    2743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0615 10:15:35.316444    2743 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0615 10:15:35.316468    2743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0615 10:15:35.319447    2743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 10:15:35.322545    2743 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0615 10:15:35.325260    2743 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 10:15:35.328517    2743 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0615 10:15:35.331931    2743 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0615 10:15:35.335523    2743 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0615 10:15:35.338254    2743 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0615 10:15:35.340803    2743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:15:35.419787    2743 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0615 10:15:35.428394    2743 start.go:466] detecting cgroup driver to use...
	I0615 10:15:35.428447    2743 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0615 10:15:35.433599    2743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 10:15:35.439013    2743 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0615 10:15:35.447371    2743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 10:15:35.452493    2743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 10:15:35.457666    2743 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 10:15:35.463389    2743 ssh_runner.go:195] Run: which cri-dockerd
	I0615 10:15:35.464753    2743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0615 10:15:35.467961    2743 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0615 10:15:35.473086    2743 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0615 10:15:35.557605    2743 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0615 10:15:35.643367    2743 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0615 10:15:35.643377    2743 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0615 10:15:35.648845    2743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:15:35.729700    2743 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 10:15:47.033997    2743 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.304306s)
	I0615 10:15:47.034068    2743 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 10:15:47.093858    2743 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0615 10:15:47.153019    2743 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 10:15:47.217975    2743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:15:47.280875    2743 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0615 10:15:47.288444    2743 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:15:47.374904    2743 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0615 10:15:47.401425    2743 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0615 10:15:47.401514    2743 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0615 10:15:47.404156    2743 start.go:534] Will wait 60s for crictl version
	I0615 10:15:47.404205    2743 ssh_runner.go:195] Run: which crictl
	I0615 10:15:47.405666    2743 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0615 10:15:47.417844    2743 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0615 10:15:47.417915    2743 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 10:15:47.425592    2743 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 10:15:47.437068    2743 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0615 10:15:47.437239    2743 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0615 10:15:47.444841    2743 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0615 10:15:47.449039    2743 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:15:47.449093    2743 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 10:15:47.454866    2743 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-822000
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0615 10:15:47.454875    2743 docker.go:566] Images already preloaded, skipping extraction
	I0615 10:15:47.454923    2743 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 10:15:47.460340    2743 docker.go:636] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-822000
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0615 10:15:47.460346    2743 cache_images.go:84] Images are preloaded, skipping loading
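	The preload decision is just a set comparison: the runtime's image list against what kubeadm needs for this release. A manual sketch of the same check:

	# list what the runtime has versus what kubeadm wants for v1.27.3
	docker images --format '{{.Repository}}:{{.Tag}}' | sort > /tmp/have
	kubeadm config images list --kubernetes-version v1.27.3 | sort > /tmp/want
	comm -13 /tmp/have /tmp/want   # any output = images that would still need pulling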
	I0615 10:15:47.460403    2743 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0615 10:15:47.467608    2743 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0615 10:15:47.467624    2743 cni.go:84] Creating CNI manager for ""
	I0615 10:15:47.467628    2743 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:15:47.467632    2743 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0615 10:15:47.467640    2743 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-822000 NodeName:functional-822000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0615 10:15:47.467693    2743 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-822000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
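	Before the generated file is handed to the init phases below, it can be sanity-checked; assuming the bundled kubeadm (v1.26+ ships a validator subcommand), a hedged check would be:

	sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" \
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml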
	
	I0615 10:15:47.467727    2743 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-822000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:functional-822000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
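	The [Unit]/[Service] fragment above becomes the 10-kubeadm.conf drop-in scp'd below (379 bytes); once the daemon is reloaded, the merged unit can be inspected with:

	systemctl cat kubelet   # prints kubelet.service plus the drop-in and its ExecStart override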
	I0615 10:15:47.467777    2743 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0615 10:15:47.471279    2743 binaries.go:44] Found k8s binaries, skipping transfer
	I0615 10:15:47.471302    2743 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0615 10:15:47.473994    2743 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0615 10:15:47.478934    2743 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0615 10:15:47.483876    2743 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0615 10:15:47.488833    2743 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0615 10:15:47.490072    2743 certs.go:56] Setting up /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000 for IP: 192.168.105.4
	I0615 10:15:47.490079    2743 certs.go:190] acquiring lock for shared ca certs: {Name:mk9ee4d7ca68f2cc32c8609d33f6ce33c43a91d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:15:47.490207    2743 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key
	I0615 10:15:47.490245    2743 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key
	I0615 10:15:47.490306    2743 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.key
	I0615 10:15:47.490348    2743 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/apiserver.key.942c473b
	I0615 10:15:47.490382    2743 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/proxy-client.key
	I0615 10:15:47.490524    2743 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/1313.pem (1338 bytes)
	W0615 10:15:47.490546    2743 certs.go:433] ignoring /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/1313_empty.pem, impossibly tiny 0 bytes
	I0615 10:15:47.490552    2743 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem (1679 bytes)
	I0615 10:15:47.490574    2743 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem (1078 bytes)
	I0615 10:15:47.490593    2743 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem (1123 bytes)
	I0615 10:15:47.490609    2743 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem (1679 bytes)
	I0615 10:15:47.490651    2743 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem (1708 bytes)
	I0615 10:15:47.490947    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0615 10:15:47.498009    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0615 10:15:47.504643    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0615 10:15:47.511664    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0615 10:15:47.518727    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0615 10:15:47.525779    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0615 10:15:47.534307    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0615 10:15:47.541457    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0615 10:15:47.547987    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem --> /usr/share/ca-certificates/13132.pem (1708 bytes)
	I0615 10:15:47.555320    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0615 10:15:47.562177    2743 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/1313.pem --> /usr/share/ca-certificates/1313.pem (1338 bytes)
	I0615 10:15:47.568956    2743 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0615 10:15:47.574552    2743 ssh_runner.go:195] Run: openssl version
	I0615 10:15:47.576313    2743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13132.pem && ln -fs /usr/share/ca-certificates/13132.pem /etc/ssl/certs/13132.pem"
	I0615 10:15:47.579734    2743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13132.pem
	I0615 10:15:47.581201    2743 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 15 17:14 /usr/share/ca-certificates/13132.pem
	I0615 10:15:47.581216    2743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13132.pem
	I0615 10:15:47.582944    2743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13132.pem /etc/ssl/certs/3ec20f2e.0"
	I0615 10:15:47.585699    2743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0615 10:15:47.588746    2743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0615 10:15:47.590201    2743 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 15 16:33 /usr/share/ca-certificates/minikubeCA.pem
	I0615 10:15:47.590220    2743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0615 10:15:47.591864    2743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0615 10:15:47.594928    2743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1313.pem && ln -fs /usr/share/ca-certificates/1313.pem /etc/ssl/certs/1313.pem"
	I0615 10:15:47.597948    2743 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1313.pem
	I0615 10:15:47.599380    2743 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 15 17:14 /usr/share/ca-certificates/1313.pem
	I0615 10:15:47.599400    2743 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1313.pem
	I0615 10:15:47.601435    2743 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1313.pem /etc/ssl/certs/51391683.0"
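	The ln -fs pattern above follows OpenSSL's c_rehash convention: each CA is linked under <subject-hash>.0 so verification can locate it by hash. Reproducing one link by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"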
	I0615 10:15:47.604243    2743 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0615 10:15:47.605721    2743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0615 10:15:47.607528    2743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0615 10:15:47.609248    2743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0615 10:15:47.610988    2743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0615 10:15:47.612961    2743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0615 10:15:47.614651    2743 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
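	Each -checkend 86400 run asks whether the certificate will still be valid 24 hours from now (exit status 0 means yes), which is how soon-to-expire control-plane certs are caught before reuse:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring: would be regenerated"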
	I0615 10:15:47.616362    2743 kubeadm.go:404] StartCluster: {Name:functional-822000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-822000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:15:47.616425    2743 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0615 10:15:47.621989    2743 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0615 10:15:47.624805    2743 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0615 10:15:47.624813    2743 kubeadm.go:636] restartCluster start
	I0615 10:15:47.624834    2743 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0615 10:15:47.627717    2743 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0615 10:15:47.628019    2743 kubeconfig.go:92] found "functional-822000" server: "https://192.168.105.4:8441"
	I0615 10:15:47.628751    2743 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0615 10:15:47.631662    2743 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0615 10:15:47.631666    2743 kubeadm.go:1128] stopping kube-system containers ...
	I0615 10:15:47.631703    2743 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0615 10:15:47.638310    2743 docker.go:462] Stopping containers: [1ccd31818e92 bfbb10a9aefc 1d70aeffeb65 6f7feb66c017 a4896e7af7e6 fff24f32d627 28fb6dd78b2e 54cf36e24f0c 5d8df16d3789 c28566a7fd2b 1997dc10cb33 2c3b8277bede d4e6f61f7469 b07f7c7badab 9003cf33d788 9b47e42b531e ca1005be0a50 d7908de265fd 203bef3f86fe 2cf9c0628b24 56c5ab0df83b ef1d147f7097 6eeca103e6f4 9cdeec1f2706 80ad96d55c57 a64c2678d6d7]
	I0615 10:15:47.638362    2743 ssh_runner.go:195] Run: docker stop 1ccd31818e92 bfbb10a9aefc 1d70aeffeb65 6f7feb66c017 a4896e7af7e6 fff24f32d627 28fb6dd78b2e 54cf36e24f0c 5d8df16d3789 c28566a7fd2b 1997dc10cb33 2c3b8277bede d4e6f61f7469 b07f7c7badab 9003cf33d788 9b47e42b531e ca1005be0a50 d7908de265fd 203bef3f86fe 2cf9c0628b24 56c5ab0df83b ef1d147f7097 6eeca103e6f4 9cdeec1f2706 80ad96d55c57 a64c2678d6d7
	I0615 10:15:47.645146    2743 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0615 10:15:47.724482    2743 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0615 10:15:47.728260    2743 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 15 17:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Jun 15 17:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jun 15 17:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Jun 15 17:14 /etc/kubernetes/scheduler.conf
	
	I0615 10:15:47.728286    2743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0615 10:15:47.731421    2743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0615 10:15:47.734850    2743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0615 10:15:47.738076    2743 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0615 10:15:47.738099    2743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0615 10:15:47.740942    2743 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0615 10:15:47.743790    2743 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0615 10:15:47.743807    2743 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
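	The grep/rm pairs above implement one rule: any kubeconfig that does not pin the expected control-plane endpoint is removed so the kubeconfig init phase below regenerates it. Condensed:

	for f in controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8441' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done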
	I0615 10:15:47.746897    2743 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0615 10:15:47.749671    2743 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0615 10:15:47.749674    2743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0615 10:15:47.771379    2743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0615 10:15:48.407645    2743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0615 10:15:48.510442    2743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0615 10:15:48.536301    2743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0615 10:15:48.565266    2743 api_server.go:52] waiting for apiserver process to appear ...
	I0615 10:15:48.565325    2743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 10:15:49.078819    2743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 10:15:49.578785    2743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 10:15:49.583342    2743 api_server.go:72] duration metric: took 1.018080166s to wait for apiserver process to appear ...
	I0615 10:15:49.583347    2743 api_server.go:88] waiting for apiserver healthz status ...
	I0615 10:15:49.583354    2743 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0615 10:15:51.846748    2743 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0615 10:15:51.846757    2743 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0615 10:15:52.348918    2743 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0615 10:15:52.363313    2743 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0615 10:15:52.363343    2743 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0615 10:15:52.848831    2743 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0615 10:15:52.852489    2743 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0615 10:15:52.852497    2743 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0615 10:15:53.348806    2743 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0615 10:15:53.352231    2743 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
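	The 403 -> 500 -> 200 progression above is the normal restart sequence: anonymous /healthz is forbidden until the RBAC bootstrap roles exist, then the remaining [-] post-start hooks clear one by one until the endpoint returns a bare "ok". The same probe by hand:

	curl -ks https://192.168.105.4:8441/healthz            # "ok" once healthy
	curl -ks "https://192.168.105.4:8441/healthz?verbose"  # per-hook [+]/[-] breakdown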
	I0615 10:15:53.357688    2743 api_server.go:141] control plane version: v1.27.3
	I0615 10:15:53.357693    2743 api_server.go:131] duration metric: took 3.77435125s to wait for apiserver health ...
	I0615 10:15:53.357697    2743 cni.go:84] Creating CNI manager for ""
	I0615 10:15:53.357701    2743 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:15:53.361967    2743 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0615 10:15:53.363612    2743 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0615 10:15:53.366728    2743 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
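	The 457-byte conflist itself is not shown; per the "recommending bridge" lines it configures the standard bridge plugin over the 10.244.0.0/16 pod CIDR (the exact layout is an assumption). To inspect what was written:

	sudo cat /etc/cni/net.d/1-k8s.conflist   # expected: bridge (+ portmap) plugins with host-local IPAM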
	I0615 10:15:53.371448    2743 system_pods.go:43] waiting for kube-system pods to appear ...
	I0615 10:15:53.375849    2743 system_pods.go:59] 7 kube-system pods found
	I0615 10:15:53.375857    2743 system_pods.go:61] "coredns-5d78c9869d-2mb86" [ee5ac03b-5bdb-4b04-94a2-c470b98363c4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0615 10:15:53.375861    2743 system_pods.go:61] "etcd-functional-822000" [94131fa2-ca44-45a9-85f2-a465f2cdae76] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0615 10:15:53.375864    2743 system_pods.go:61] "kube-apiserver-functional-822000" [a69fad9c-3650-4d9a-b45c-04adf05832e9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0615 10:15:53.375866    2743 system_pods.go:61] "kube-controller-manager-functional-822000" [3cb2c21a-e22d-4c36-a18e-fb87c75eecb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0615 10:15:53.375869    2743 system_pods.go:61] "kube-proxy-4f266" [44fcf088-830e-4809-831c-a9a4e9cba8da] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0615 10:15:53.375871    2743 system_pods.go:61] "kube-scheduler-functional-822000" [18d5dc77-45f6-4fe1-a42d-d63fe8cc31d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0615 10:15:53.375873    2743 system_pods.go:61] "storage-provisioner" [1a8f47b4-1068-49ad-9337-22d29b70613c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0615 10:15:53.375875    2743 system_pods.go:74] duration metric: took 4.425292ms to wait for pod list to return data ...
	I0615 10:15:53.375877    2743 node_conditions.go:102] verifying NodePressure condition ...
	I0615 10:15:53.377442    2743 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0615 10:15:53.377449    2743 node_conditions.go:123] node cpu capacity is 2
	I0615 10:15:53.377453    2743 node_conditions.go:105] duration metric: took 1.574458ms to run NodePressure ...
	I0615 10:15:53.377458    2743 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0615 10:15:53.443553    2743 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0615 10:15:53.445791    2743 kubeadm.go:787] kubelet initialised
	I0615 10:15:53.445794    2743 kubeadm.go:788] duration metric: took 2.23525ms waiting for restarted kubelet to initialise ...
	I0615 10:15:53.445797    2743 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 10:15:53.448503    2743 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-2mb86" in "kube-system" namespace to be "Ready" ...
	I0615 10:15:55.468268    2743 pod_ready.go:102] pod "coredns-5d78c9869d-2mb86" in "kube-system" namespace has status "Ready":"False"
	I0615 10:15:57.466945    2743 pod_ready.go:92] pod "coredns-5d78c9869d-2mb86" in "kube-system" namespace has status "Ready":"True"
	I0615 10:15:57.466973    2743 pod_ready.go:81] duration metric: took 4.0184665s waiting for pod "coredns-5d78c9869d-2mb86" in "kube-system" namespace to be "Ready" ...
	I0615 10:15:57.466992    2743 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:15:59.491192    2743 pod_ready.go:102] pod "etcd-functional-822000" in "kube-system" namespace has status "Ready":"False"
	I0615 10:16:01.994445    2743 pod_ready.go:102] pod "etcd-functional-822000" in "kube-system" namespace has status "Ready":"False"
	I0615 10:16:04.480083    2743 pod_ready.go:102] pod "etcd-functional-822000" in "kube-system" namespace has status "Ready":"False"
	I0615 10:16:06.483555    2743 pod_ready.go:102] pod "etcd-functional-822000" in "kube-system" namespace has status "Ready":"False"
	I0615 10:16:07.983389    2743 pod_ready.go:92] pod "etcd-functional-822000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:16:07.983398    2743 pod_ready.go:81] duration metric: took 10.51642025s waiting for pod "etcd-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:07.983409    2743 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:07.988432    2743 pod_ready.go:92] pod "kube-apiserver-functional-822000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:16:07.988436    2743 pod_ready.go:81] duration metric: took 5.022542ms waiting for pod "kube-apiserver-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:07.988442    2743 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:07.992742    2743 pod_ready.go:92] pod "kube-controller-manager-functional-822000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:16:07.992747    2743 pod_ready.go:81] duration metric: took 4.301334ms waiting for pod "kube-controller-manager-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:07.992753    2743 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4f266" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:07.997086    2743 pod_ready.go:92] pod "kube-proxy-4f266" in "kube-system" namespace has status "Ready":"True"
	I0615 10:16:07.997090    2743 pod_ready.go:81] duration metric: took 4.333667ms waiting for pod "kube-proxy-4f266" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:07.997094    2743 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:08.000945    2743 pod_ready.go:92] pod "kube-scheduler-functional-822000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:16:08.000948    2743 pod_ready.go:81] duration metric: took 3.849834ms waiting for pod "kube-scheduler-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:08.000953    2743 pod_ready.go:38] duration metric: took 14.555180542s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 10:16:08.000967    2743 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0615 10:16:08.007092    2743 ops.go:34] apiserver oom_adj: -16
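	oom_adj uses the legacy kernel scale (-17..15); a reading of -16 maps to an oom_score_adj near -1000, meaning the kubelet has made the apiserver all but exempt from the OOM killer. The modern interface reads:

	cat /proc/$(pgrep kube-apiserver)/oom_score_adj   # a large negative value is expected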
	I0615 10:16:08.007097    2743 kubeadm.go:640] restartCluster took 20.382321333s
	I0615 10:16:08.007101    2743 kubeadm.go:406] StartCluster complete in 20.390779834s
	I0615 10:16:08.007112    2743 settings.go:142] acquiring lock: {Name:mk45a698fcd8dd8ae6984c9cf4ad4d183fdb5424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:16:08.007252    2743 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:16:08.007748    2743 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/kubeconfig: {Name:mkbe9cac04fb467055323f2e3d5db2c6ddc287ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:16:08.008765    2743 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0615 10:16:08.008779    2743 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0615 10:16:08.008826    2743 addons.go:66] Setting storage-provisioner=true in profile "functional-822000"
	I0615 10:16:08.008835    2743 addons.go:228] Setting addon storage-provisioner=true in "functional-822000"
	W0615 10:16:08.008838    2743 addons.go:237] addon storage-provisioner should already be in state true
	I0615 10:16:08.008838    2743 addons.go:66] Setting default-storageclass=true in profile "functional-822000"
	I0615 10:16:08.008849    2743 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-822000"
	I0615 10:16:08.008875    2743 host.go:66] Checking if "functional-822000" exists ...
	I0615 10:16:08.008938    2743 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:16:08.016051    2743 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0615 10:16:08.020021    2743 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0615 10:16:08.020025    2743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0615 10:16:08.020034    2743 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/id_rsa Username:docker}
	I0615 10:16:08.020543    2743 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-822000" context rescaled to 1 replicas
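	The rescale above trims CoreDNS to a single replica for this one-node profile; the by-hand equivalent (a hypothetical invocation -- the log does it through the API directly) would be:

	kubectl -n kube-system scale deployment coredns --replicas=1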
	I0615 10:16:08.020554    2743 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:16:08.024966    2743 out.go:177] * Verifying Kubernetes components...
	I0615 10:16:08.022808    2743 addons.go:228] Setting addon default-storageclass=true in "functional-822000"
	W0615 10:16:08.032975    2743 addons.go:237] addon default-storageclass should already be in state true
	I0615 10:16:08.032990    2743 host.go:66] Checking if "functional-822000" exists ...
	I0615 10:16:08.033020    2743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 10:16:08.033748    2743 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0615 10:16:08.033750    2743 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0615 10:16:08.033755    2743 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/id_rsa Username:docker}
	I0615 10:16:08.057993    2743 node_ready.go:35] waiting up to 6m0s for node "functional-822000" to be "Ready" ...
	I0615 10:16:08.058026    2743 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0615 10:16:08.059731    2743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0615 10:16:08.067277    2743 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0615 10:16:08.180848    2743 node_ready.go:49] node "functional-822000" has status "Ready":"True"
	I0615 10:16:08.180853    2743 node_ready.go:38] duration metric: took 122.850625ms waiting for node "functional-822000" to be "Ready" ...
	I0615 10:16:08.180855    2743 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 10:16:08.382684    2743 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-2mb86" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:08.431167    2743 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0615 10:16:08.439216    2743 addons.go:499] enable addons completed in 430.438584ms: enabled=[storage-provisioner default-storageclass]
	I0615 10:16:08.782448    2743 pod_ready.go:92] pod "coredns-5d78c9869d-2mb86" in "kube-system" namespace has status "Ready":"True"
	I0615 10:16:08.782457    2743 pod_ready.go:81] duration metric: took 399.767333ms waiting for pod "coredns-5d78c9869d-2mb86" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:08.782465    2743 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:09.186572    2743 pod_ready.go:92] pod "etcd-functional-822000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:16:09.186597    2743 pod_ready.go:81] duration metric: took 404.123542ms waiting for pod "etcd-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:09.186616    2743 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:09.585876    2743 pod_ready.go:92] pod "kube-apiserver-functional-822000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:16:09.585904    2743 pod_ready.go:81] duration metric: took 399.275542ms waiting for pod "kube-apiserver-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:09.585923    2743 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:09.982318    2743 pod_ready.go:92] pod "kube-controller-manager-functional-822000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:16:09.982326    2743 pod_ready.go:81] duration metric: took 396.395ms waiting for pod "kube-controller-manager-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:09.982332    2743 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-4f266" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:10.386354    2743 pod_ready.go:92] pod "kube-proxy-4f266" in "kube-system" namespace has status "Ready":"True"
	I0615 10:16:10.386373    2743 pod_ready.go:81] duration metric: took 404.033625ms waiting for pod "kube-proxy-4f266" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:10.386395    2743 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:10.787353    2743 pod_ready.go:92] pod "kube-scheduler-functional-822000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:16:10.787380    2743 pod_ready.go:81] duration metric: took 400.972834ms waiting for pod "kube-scheduler-functional-822000" in "kube-system" namespace to be "Ready" ...
	I0615 10:16:10.787403    2743 pod_ready.go:38] duration metric: took 2.606542292s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 10:16:10.787442    2743 api_server.go:52] waiting for apiserver process to appear ...
	I0615 10:16:10.787726    2743 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 10:16:10.809605    2743 api_server.go:72] duration metric: took 2.789037583s to wait for apiserver process to appear ...
	I0615 10:16:10.809618    2743 api_server.go:88] waiting for apiserver healthz status ...
	I0615 10:16:10.809632    2743 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0615 10:16:10.818038    2743 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0615 10:16:10.819223    2743 api_server.go:141] control plane version: v1.27.3
	I0615 10:16:10.819232    2743 api_server.go:131] duration metric: took 9.608708ms to wait for apiserver health ...
	I0615 10:16:10.819237    2743 system_pods.go:43] waiting for kube-system pods to appear ...
	I0615 10:16:10.985184    2743 system_pods.go:59] 7 kube-system pods found
	I0615 10:16:10.985198    2743 system_pods.go:61] "coredns-5d78c9869d-2mb86" [ee5ac03b-5bdb-4b04-94a2-c470b98363c4] Running
	I0615 10:16:10.985203    2743 system_pods.go:61] "etcd-functional-822000" [94131fa2-ca44-45a9-85f2-a465f2cdae76] Running
	I0615 10:16:10.985207    2743 system_pods.go:61] "kube-apiserver-functional-822000" [a69fad9c-3650-4d9a-b45c-04adf05832e9] Running
	I0615 10:16:10.985214    2743 system_pods.go:61] "kube-controller-manager-functional-822000" [3cb2c21a-e22d-4c36-a18e-fb87c75eecb9] Running
	I0615 10:16:10.985218    2743 system_pods.go:61] "kube-proxy-4f266" [44fcf088-830e-4809-831c-a9a4e9cba8da] Running
	I0615 10:16:10.985221    2743 system_pods.go:61] "kube-scheduler-functional-822000" [18d5dc77-45f6-4fe1-a42d-d63fe8cc31d2] Running
	I0615 10:16:10.985225    2743 system_pods.go:61] "storage-provisioner" [1a8f47b4-1068-49ad-9337-22d29b70613c] Running
	I0615 10:16:10.985230    2743 system_pods.go:74] duration metric: took 165.99025ms to wait for pod list to return data ...
	I0615 10:16:10.985238    2743 default_sa.go:34] waiting for default service account to be created ...
	I0615 10:16:11.186251    2743 default_sa.go:45] found service account: "default"
	I0615 10:16:11.186275    2743 default_sa.go:55] duration metric: took 201.029667ms for default service account to be created ...
	I0615 10:16:11.186287    2743 system_pods.go:116] waiting for k8s-apps to be running ...
	I0615 10:16:11.391187    2743 system_pods.go:86] 7 kube-system pods found
	I0615 10:16:11.391210    2743 system_pods.go:89] "coredns-5d78c9869d-2mb86" [ee5ac03b-5bdb-4b04-94a2-c470b98363c4] Running
	I0615 10:16:11.391218    2743 system_pods.go:89] "etcd-functional-822000" [94131fa2-ca44-45a9-85f2-a465f2cdae76] Running
	I0615 10:16:11.391226    2743 system_pods.go:89] "kube-apiserver-functional-822000" [a69fad9c-3650-4d9a-b45c-04adf05832e9] Running
	I0615 10:16:11.391237    2743 system_pods.go:89] "kube-controller-manager-functional-822000" [3cb2c21a-e22d-4c36-a18e-fb87c75eecb9] Running
	I0615 10:16:11.391247    2743 system_pods.go:89] "kube-proxy-4f266" [44fcf088-830e-4809-831c-a9a4e9cba8da] Running
	I0615 10:16:11.391254    2743 system_pods.go:89] "kube-scheduler-functional-822000" [18d5dc77-45f6-4fe1-a42d-d63fe8cc31d2] Running
	I0615 10:16:11.391261    2743 system_pods.go:89] "storage-provisioner" [1a8f47b4-1068-49ad-9337-22d29b70613c] Running
	I0615 10:16:11.391273    2743 system_pods.go:126] duration metric: took 204.979417ms to wait for k8s-apps to be running ...
	I0615 10:16:11.391285    2743 system_svc.go:44] waiting for kubelet service to be running ....
	I0615 10:16:11.391498    2743 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 10:16:11.407942    2743 system_svc.go:56] duration metric: took 16.659ms WaitForService to wait for kubelet.
	I0615 10:16:11.407952    2743 kubeadm.go:581] duration metric: took 3.387392292s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0615 10:16:11.407972    2743 node_conditions.go:102] verifying NodePressure condition ...
	I0615 10:16:11.587333    2743 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0615 10:16:11.587364    2743 node_conditions.go:123] node cpu capacity is 2
	I0615 10:16:11.587392    2743 node_conditions.go:105] duration metric: took 179.413917ms to run NodePressure ...
	I0615 10:16:11.587419    2743 start.go:228] waiting for startup goroutines ...
	I0615 10:16:11.587436    2743 start.go:233] waiting for cluster config update ...
	I0615 10:16:11.587456    2743 start.go:242] writing updated cluster config ...
	I0615 10:16:11.588816    2743 ssh_runner.go:195] Run: rm -f paused
	I0615 10:16:11.653761    2743 start.go:582] kubectl: 1.25.9, cluster: 1.27.3 (minor skew: 2)
	I0615 10:16:11.659130    2743 out.go:177] 
	W0615 10:16:11.663175    2743 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.3.
	I0615 10:16:11.666204    2743 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0615 10:16:11.673201    2743 out.go:177] * Done! kubectl is now configured to use "functional-822000" cluster and "default" namespace by default
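	The version warning above is the standard kubectl skew rule: a client more than one minor release away from the server (1.25 vs 1.27 here) is unsupported. The bundled client sidesteps it:

	minikube -p functional-822000 kubectl -- version   # runs a kubectl matching the cluster's v1.27.3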
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-06-15 17:14:15 UTC, ends at Thu 2023-06-15 17:17:02 UTC. --
	Jun 15 17:16:51 functional-822000 dockerd[6562]: time="2023-06-15T17:16:51.103324600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:16:51 functional-822000 dockerd[6562]: time="2023-06-15T17:16:51.103351266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:16:51 functional-822000 dockerd[6562]: time="2023-06-15T17:16:51.103358474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:16:51 functional-822000 cri-dockerd[6835]: time="2023-06-15T17:16:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/52a3026be8d2b6ffa15a20f08c55c6a4663664d6afffaa78c3b561c7b6146548/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 15 17:16:53 functional-822000 cri-dockerd[6835]: time="2023-06-15T17:16:53Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Jun 15 17:16:53 functional-822000 dockerd[6562]: time="2023-06-15T17:16:53.660823183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 17:16:53 functional-822000 dockerd[6562]: time="2023-06-15T17:16:53.660860474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:16:53 functional-822000 dockerd[6562]: time="2023-06-15T17:16:53.660871474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:16:53 functional-822000 dockerd[6562]: time="2023-06-15T17:16:53.660877640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:16:53 functional-822000 dockerd[6556]: time="2023-06-15T17:16:53.712750315Z" level=info msg="ignoring event" container=dc6cb233a840f3d32c92fdee5080c4fdb6da9a181367ef3f647566e29e32b5a0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 17:16:53 functional-822000 dockerd[6562]: time="2023-06-15T17:16:53.712831771Z" level=info msg="shim disconnected" id=dc6cb233a840f3d32c92fdee5080c4fdb6da9a181367ef3f647566e29e32b5a0 namespace=moby
	Jun 15 17:16:53 functional-822000 dockerd[6562]: time="2023-06-15T17:16:53.712855520Z" level=warning msg="cleaning up after shim disconnected" id=dc6cb233a840f3d32c92fdee5080c4fdb6da9a181367ef3f647566e29e32b5a0 namespace=moby
	Jun 15 17:16:53 functional-822000 dockerd[6562]: time="2023-06-15T17:16:53.712860478Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 17:16:55 functional-822000 dockerd[6556]: time="2023-06-15T17:16:55.676256641Z" level=info msg="ignoring event" container=52a3026be8d2b6ffa15a20f08c55c6a4663664d6afffaa78c3b561c7b6146548 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 17:16:55 functional-822000 dockerd[6562]: time="2023-06-15T17:16:55.676547092Z" level=info msg="shim disconnected" id=52a3026be8d2b6ffa15a20f08c55c6a4663664d6afffaa78c3b561c7b6146548 namespace=moby
	Jun 15 17:16:55 functional-822000 dockerd[6562]: time="2023-06-15T17:16:55.676592591Z" level=warning msg="cleaning up after shim disconnected" id=52a3026be8d2b6ffa15a20f08c55c6a4663664d6afffaa78c3b561c7b6146548 namespace=moby
	Jun 15 17:16:55 functional-822000 dockerd[6562]: time="2023-06-15T17:16:55.676599341Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 17:16:58 functional-822000 dockerd[6562]: time="2023-06-15T17:16:58.628803308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 17:16:58 functional-822000 dockerd[6562]: time="2023-06-15T17:16:58.629329213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:16:58 functional-822000 dockerd[6562]: time="2023-06-15T17:16:58.629386670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:16:58 functional-822000 dockerd[6562]: time="2023-06-15T17:16:58.629424461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:16:58 functional-822000 dockerd[6562]: time="2023-06-15T17:16:58.671515768Z" level=info msg="shim disconnected" id=e14b7ceab190b9c7635bdb2028a741117d52015660950395ba7e7a222f933b6d namespace=moby
	Jun 15 17:16:58 functional-822000 dockerd[6562]: time="2023-06-15T17:16:58.671549642Z" level=warning msg="cleaning up after shim disconnected" id=e14b7ceab190b9c7635bdb2028a741117d52015660950395ba7e7a222f933b6d namespace=moby
	Jun 15 17:16:58 functional-822000 dockerd[6562]: time="2023-06-15T17:16:58.671554142Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 17:16:58 functional-822000 dockerd[6556]: time="2023-06-15T17:16:58.671727263Z" level=info msg="ignoring event" container=e14b7ceab190b9c7635bdb2028a741117d52015660950395ba7e7a222f933b6d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	e14b7ceab190b       72565bf5bbedf                                                                                         4 seconds ago        Exited              echoserver-arm            2                   e7e5dbe374778
	dc6cb233a840f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 seconds ago        Exited              mount-munger              0                   52a3026be8d2b
	b47a7bf7af32f       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            2                   313b6e31fd15b
	065e2814cde84       nginx@sha256:593dac25b7733ffb7afe1a72649a43e574778bf025ad60514ef40f6b5d606247                         26 seconds ago       Running             myfrontend                0                   e04fd747696ca
	207aa9ef495aa       nginx@sha256:9b0582aaf2b2d6ffc2451630c28cb2b0019905f1bee8a38add596b4904522381                         41 seconds ago       Running             nginx                     0                   880ecbeb98b6a
	1594619867b3a       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   864ea53d5f722
	ceb47dd9032eb       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       1                   04b17504cf12d
	a5b71dc327bc7       fb73e92641fd5                                                                                         About a minute ago   Running             kube-proxy                2                   fb9c5993bcffc
	2aa12046c8e70       39dfb036b0986                                                                                         About a minute ago   Running             kube-apiserver            0                   acbb08555cfca
	23c197253fd67       bcb9e554eaab6                                                                                         About a minute ago   Running             kube-scheduler            2                   88998c7e1ef4a
	d141f912ccc7a       24bc64e911039                                                                                         About a minute ago   Running             etcd                      2                   4baf8f835e968
	45c0f1fb80a42       ab3683b584ae5                                                                                         About a minute ago   Running             kube-controller-manager   2                   027244920b0ae
	1ccd31818e928       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       0                   bfbb10a9aefc3
	1d70aeffeb658       fb73e92641fd5                                                                                         About a minute ago   Exited              kube-proxy                1                   d4e6f61f7469a
	6f7feb66c0173       bcb9e554eaab6                                                                                         About a minute ago   Exited              kube-scheduler            1                   2c3b8277bedeb
	a4896e7af7e65       24bc64e911039                                                                                         About a minute ago   Exited              etcd                      1                   c28566a7fd2b1
	fff24f32d627f       ab3683b584ae5                                                                                         About a minute ago   Exited              kube-controller-manager   1                   1997dc10cb33d
	54cf36e24f0c6       97e04611ad434                                                                                         About a minute ago   Exited              coredns                   1                   b07f7c7badab1
	
	* 
	* ==> coredns [1594619867b3] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33416 - 39898 "HINFO IN 5899221188193476495.4853030803859181197. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.00425471s
	[INFO] 10.244.0.1:56251 - 29215 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000102951s
	[INFO] 10.244.0.1:33125 - 22513 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000089661s
	[INFO] 10.244.0.1:46319 - 52343 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000041205s
	[INFO] 10.244.0.1:56936 - 11069 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001115759s
	[INFO] 10.244.0.1:1584 - 10368 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000070787s
	[INFO] 10.244.0.1:9686 - 44153 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000090869s
	
	* 
	* ==> coredns [54cf36e24f0c] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57631 - 22102 "HINFO IN 8743971270733654336.871467733623515682. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.004729379s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-822000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-822000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627
	                    minikube.k8s.io/name=functional-822000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_15T10_14_32_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Jun 2023 17:14:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-822000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Jun 2023 17:16:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Jun 2023 17:16:53 +0000   Thu, 15 Jun 2023 17:14:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Jun 2023 17:16:53 +0000   Thu, 15 Jun 2023 17:14:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Jun 2023 17:16:53 +0000   Thu, 15 Jun 2023 17:14:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Jun 2023 17:16:53 +0000   Thu, 15 Jun 2023 17:14:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-822000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b4dd0ab949c4be6b78e472e724c783f
	  System UUID:                5b4dd0ab949c4be6b78e472e724c783f
	  Boot ID:                    80cba07a-2f98-47cd-8f1c-1335bd661c41
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-hj7g8                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  default                     hello-node-connect-58d66798bb-69rxt          0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 coredns-5d78c9869d-2mb86                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m17s
	  kube-system                 etcd-functional-822000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m30s
	  kube-system                 kube-apiserver-functional-822000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-functional-822000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 kube-proxy-4f266                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-functional-822000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  Starting                 69s                    kube-proxy       
	  Normal  Starting                 115s                   kube-proxy       
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node functional-822000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node functional-822000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s (x7 over 2m35s)  kubelet          Node functional-822000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m30s                  kubelet          Node functional-822000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m30s                  kubelet          Node functional-822000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m30s                  kubelet          Node functional-822000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m30s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m27s                  kubelet          Node functional-822000 status is now: NodeReady
	  Normal  RegisteredNode           2m18s                  node-controller  Node functional-822000 event: Registered Node functional-822000 in Controller
	  Normal  NodeNotReady             2m9s                   kubelet          Node functional-822000 status is now: NodeNotReady
	  Normal  RegisteredNode           102s                   node-controller  Node functional-822000 event: Registered Node functional-822000 in Controller
	  Normal  Starting                 74s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  74s (x8 over 74s)      kubelet          Node functional-822000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s (x8 over 74s)      kubelet          Node functional-822000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s (x7 over 74s)      kubelet          Node functional-822000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           58s                    node-controller  Node functional-822000 event: Registered Node functional-822000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.084368] systemd-fstab-generator[3676]: Ignoring "noauto" for root device
	[  +0.087522] systemd-fstab-generator[3689]: Ignoring "noauto" for root device
	[  +3.664767] kauditd_printk_skb: 28 callbacks suppressed
	[  +8.046144] systemd-fstab-generator[4262]: Ignoring "noauto" for root device
	[  +0.066883] systemd-fstab-generator[4273]: Ignoring "noauto" for root device
	[  +0.087554] systemd-fstab-generator[4332]: Ignoring "noauto" for root device
	[  +0.074331] systemd-fstab-generator[4358]: Ignoring "noauto" for root device
	[  +0.101763] systemd-fstab-generator[4469]: Ignoring "noauto" for root device
	[Jun15 17:15] kauditd_printk_skb: 34 callbacks suppressed
	[ +27.445259] systemd-fstab-generator[6096]: Ignoring "noauto" for root device
	[  +0.134823] systemd-fstab-generator[6130]: Ignoring "noauto" for root device
	[  +0.085085] systemd-fstab-generator[6141]: Ignoring "noauto" for root device
	[  +0.087412] systemd-fstab-generator[6154]: Ignoring "noauto" for root device
	[ +11.379128] systemd-fstab-generator[6719]: Ignoring "noauto" for root device
	[  +0.060479] systemd-fstab-generator[6730]: Ignoring "noauto" for root device
	[  +0.065137] systemd-fstab-generator[6741]: Ignoring "noauto" for root device
	[  +0.062567] systemd-fstab-generator[6752]: Ignoring "noauto" for root device
	[  +0.095030] systemd-fstab-generator[6828]: Ignoring "noauto" for root device
	[  +1.125432] systemd-fstab-generator[7083]: Ignoring "noauto" for root device
	[  +4.580493] kauditd_printk_skb: 29 callbacks suppressed
	[Jun15 17:16] kauditd_printk_skb: 16 callbacks suppressed
	[ +10.434848] kauditd_printk_skb: 1 callbacks suppressed
	[  +1.736080] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +11.750200] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.716146] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [a4896e7af7e6] <==
	* {"level":"info","ts":"2023-06-15T17:15:05.383Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-06-15T17:15:05.384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-06-15T17:15:05.384Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-06-15T17:15:05.384Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T17:15:05.384Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T17:15:07.056Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-06-15T17:15:07.057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-06-15T17:15:07.057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-06-15T17:15:07.057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-06-15T17:15:07.057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-06-15T17:15:07.057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-06-15T17:15:07.057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-06-15T17:15:07.060Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-822000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-15T17:15:07.060Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T17:15:07.061Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T17:15:07.061Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-15T17:15:07.061Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-15T17:15:07.063Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-15T17:15:07.064Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-06-15T17:15:35.733Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-06-15T17:15:35.733Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-822000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"info","ts":"2023-06-15T17:15:35.756Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-06-15T17:15:35.758Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-15T17:15:35.760Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-15T17:15:35.760Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-822000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [d141f912ccc7] <==
	* {"level":"info","ts":"2023-06-15T17:15:49.599Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-15T17:15:49.599Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-15T17:15:49.599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-06-15T17:15:49.599Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-06-15T17:15:49.599Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T17:15:49.599Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T17:15:49.601Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-15T17:15:49.605Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-15T17:15:49.606Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-15T17:15:49.606Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-15T17:15:49.606Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-15T17:15:51.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-06-15T17:15:51.161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-06-15T17:15:51.162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-06-15T17:15:51.162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-06-15T17:15:51.162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-06-15T17:15:51.162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-06-15T17:15:51.162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-06-15T17:15:51.165Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-822000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-15T17:15:51.165Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T17:15:51.165Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-15T17:15:51.165Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-15T17:15:51.165Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T17:15:51.168Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-15T17:15:51.168Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	
	* 
	* ==> kernel <==
	*  17:17:02 up 2 min,  0 users,  load average: 0.62, 0.28, 0.11
	Linux functional-822000 5.10.57 #1 SMP PREEMPT Wed Jun 14 05:08:37 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2aa12046c8e7] <==
	* I0615 17:15:51.894665       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0615 17:15:51.894691       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0615 17:15:51.894742       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0615 17:15:51.894913       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0615 17:15:51.895406       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0615 17:15:51.896717       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0615 17:15:51.896729       1 aggregator.go:152] initial CRD sync complete...
	I0615 17:15:51.896733       1 autoregister_controller.go:141] Starting autoregister controller
	I0615 17:15:51.896763       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0615 17:15:51.896770       1 cache.go:39] Caches are synced for autoregister controller
	I0615 17:15:51.955359       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0615 17:15:52.666625       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0615 17:15:52.797216       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0615 17:15:53.395168       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0615 17:15:53.398357       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0615 17:15:53.410892       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0615 17:15:53.420777       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0615 17:15:53.423174       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0615 17:16:04.406192       1 controller.go:624] quota admission added evaluator for: endpoints
	I0615 17:16:04.557644       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0615 17:16:13.110365       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs=map[IPv4:10.111.146.72]
	I0615 17:16:17.566935       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.99.119.161]
	I0615 17:16:28.970064       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0615 17:16:29.012840       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.104.130.152]
	I0615 17:16:42.497998       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.102.183.98]
	
	* 
	* ==> kube-controller-manager [45c0f1fb80a4] <==
	* I0615 17:16:04.414511       1 shared_informer.go:318] Caches are synced for daemon sets
	I0615 17:16:04.416608       1 shared_informer.go:318] Caches are synced for cronjob
	I0615 17:16:04.416670       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0615 17:16:04.417720       1 shared_informer.go:318] Caches are synced for taint
	I0615 17:16:04.417792       1 node_lifecycle_controller.go:1223] "Initializing eviction metric for zone" zone=""
	I0615 17:16:04.417848       1 node_lifecycle_controller.go:875] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-822000"
	I0615 17:16:04.417884       1 node_lifecycle_controller.go:1069] "Controller detected that zone is now in new state" zone="" newState=Normal
	I0615 17:16:04.417799       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0615 17:16:04.417923       1 taint_manager.go:211] "Sending events to api server"
	I0615 17:16:04.417999       1 event.go:307] "Event occurred" object="functional-822000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node functional-822000 event: Registered Node functional-822000 in Controller"
	I0615 17:16:04.437563       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0615 17:16:04.453491       1 shared_informer.go:318] Caches are synced for crt configmap
	I0615 17:16:04.498147       1 shared_informer.go:318] Caches are synced for resource quota
	I0615 17:16:04.507833       1 shared_informer.go:318] Caches are synced for resource quota
	I0615 17:16:04.553198       1 shared_informer.go:318] Caches are synced for service account
	I0615 17:16:04.601469       1 shared_informer.go:318] Caches are synced for namespace
	I0615 17:16:04.932038       1 shared_informer.go:318] Caches are synced for garbage collector
	I0615 17:16:04.942651       1 shared_informer.go:318] Caches are synced for garbage collector
	I0615 17:16:04.942666       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0615 17:16:22.460225       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0615 17:16:22.460317       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0615 17:16:28.972064       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0615 17:16:28.980567       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-69rxt"
	I0615 17:16:42.456947       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-7b684b55f9 to 1"
	I0615 17:16:42.461988       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-hj7g8"
	
	* 
	* ==> kube-controller-manager [fff24f32d627] <==
	* I0615 17:15:20.666201       1 shared_informer.go:318] Caches are synced for daemon sets
	I0615 17:15:20.669371       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0615 17:15:20.669414       1 shared_informer.go:318] Caches are synced for PV protection
	I0615 17:15:20.670674       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0615 17:15:20.684341       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0615 17:15:20.685393       1 shared_informer.go:318] Caches are synced for TTL
	I0615 17:15:20.685427       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0615 17:15:20.686533       1 shared_informer.go:318] Caches are synced for disruption
	I0615 17:15:20.691779       1 shared_informer.go:318] Caches are synced for crt configmap
	I0615 17:15:20.691829       1 shared_informer.go:318] Caches are synced for node
	I0615 17:15:20.691915       1 range_allocator.go:174] "Sending events to api server"
	I0615 17:15:20.691934       1 shared_informer.go:318] Caches are synced for GC
	I0615 17:15:20.691955       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0615 17:15:20.691957       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0615 17:15:20.691975       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0615 17:15:20.693302       1 shared_informer.go:318] Caches are synced for cronjob
	I0615 17:15:20.759485       1 shared_informer.go:318] Caches are synced for persistent volume
	I0615 17:15:20.835948       1 shared_informer.go:318] Caches are synced for attach detach
	I0615 17:15:20.849699       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0615 17:15:20.858865       1 shared_informer.go:318] Caches are synced for resource quota
	I0615 17:15:20.871580       1 shared_informer.go:318] Caches are synced for endpoint
	I0615 17:15:20.895755       1 shared_informer.go:318] Caches are synced for resource quota
	I0615 17:15:21.216610       1 shared_informer.go:318] Caches are synced for garbage collector
	I0615 17:15:21.244243       1 shared_informer.go:318] Caches are synced for garbage collector
	I0615 17:15:21.244357       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [1d70aeffeb65] <==
	* I0615 17:15:07.779635       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0615 17:15:07.779693       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0615 17:15:07.779709       1 server_others.go:554] "Using iptables proxy"
	I0615 17:15:07.801752       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0615 17:15:07.801764       1 server_others.go:192] "Using iptables Proxier"
	I0615 17:15:07.801780       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0615 17:15:07.801965       1 server.go:658] "Version info" version="v1.27.3"
	I0615 17:15:07.801969       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0615 17:15:07.802541       1 config.go:188] "Starting service config controller"
	I0615 17:15:07.802570       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0615 17:15:07.802594       1 config.go:97] "Starting endpoint slice config controller"
	I0615 17:15:07.802607       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0615 17:15:07.802838       1 config.go:315] "Starting node config controller"
	I0615 17:15:07.803115       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0615 17:15:07.905897       1 shared_informer.go:318] Caches are synced for service config
	I0615 17:15:07.905895       1 shared_informer.go:318] Caches are synced for node config
	I0615 17:15:07.905960       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [a5b71dc327bc] <==
	* I0615 17:15:53.192538       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0615 17:15:53.192564       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0615 17:15:53.192573       1 server_others.go:554] "Using iptables proxy"
	I0615 17:15:53.199954       1 server_others.go:178] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0615 17:15:53.199964       1 server_others.go:192] "Using iptables Proxier"
	I0615 17:15:53.199976       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0615 17:15:53.200122       1 server.go:658] "Version info" version="v1.27.3"
	I0615 17:15:53.200125       1 server.go:660] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0615 17:15:53.200417       1 config.go:188] "Starting service config controller"
	I0615 17:15:53.200424       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0615 17:15:53.200432       1 config.go:97] "Starting endpoint slice config controller"
	I0615 17:15:53.200434       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0615 17:15:53.200609       1 config.go:315] "Starting node config controller"
	I0615 17:15:53.200611       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0615 17:15:53.301420       1 shared_informer.go:318] Caches are synced for node config
	I0615 17:15:53.301431       1 shared_informer.go:318] Caches are synced for service config
	I0615 17:15:53.301439       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [23c197253fd6] <==
	* I0615 17:15:49.828591       1 serving.go:348] Generated self-signed cert in-memory
	W0615 17:15:51.850352       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0615 17:15:51.850372       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0615 17:15:51.850377       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0615 17:15:51.850380       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0615 17:15:51.859554       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0615 17:15:51.859630       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0615 17:15:51.860656       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0615 17:15:51.860763       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0615 17:15:51.860786       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0615 17:15:51.860809       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0615 17:15:51.961557       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [6f7feb66c017] <==
	* I0615 17:15:05.781750       1 serving.go:348] Generated self-signed cert in-memory
	W0615 17:15:07.741866       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0615 17:15:07.741921       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0615 17:15:07.741954       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0615 17:15:07.741971       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0615 17:15:07.766595       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.3"
	I0615 17:15:07.766705       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0615 17:15:07.769143       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0615 17:15:07.769193       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0615 17:15:07.769263       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0615 17:15:07.769298       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0615 17:15:07.869351       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0615 17:15:35.730076       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0615 17:15:35.730094       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0615 17:15:35.730133       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0615 17:15:35.730145       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-06-15 17:14:15 UTC, ends at Thu 2023-06-15 17:17:03 UTC. --
	Jun 15 17:16:48 functional-822000 kubelet[7089]: I0615 17:16:48.551746    7089 scope.go:115] "RemoveContainer" containerID="d59724f8441d59dbde7204d2702ad2216f1560edbd7b377ef4e0366b033ee171"
	Jun 15 17:16:48 functional-822000 kubelet[7089]: E0615 17:16:48.559104    7089 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 15 17:16:48 functional-822000 kubelet[7089]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 15 17:16:48 functional-822000 kubelet[7089]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 15 17:16:48 functional-822000 kubelet[7089]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 15 17:16:48 functional-822000 kubelet[7089]: I0615 17:16:48.624561    7089 scope.go:115] "RemoveContainer" containerID="28fb6dd78b2e776d216ff8a899fff75131bc13fd538056673d8a61f0fdba82f4"
	Jun 15 17:16:48 functional-822000 kubelet[7089]: I0615 17:16:48.642945    7089 scope.go:115] "RemoveContainer" containerID="d59724f8441d59dbde7204d2702ad2216f1560edbd7b377ef4e0366b033ee171"
	Jun 15 17:16:49 functional-822000 kubelet[7089]: I0615 17:16:49.488023    7089 scope.go:115] "RemoveContainer" containerID="b47a7bf7af32f2c05be43a14b7c475b9fbe761061001bc0482a4f3322360cfb4"
	Jun 15 17:16:49 functional-822000 kubelet[7089]: E0615 17:16:49.488118    7089 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-69rxt_default(2df51cd2-c003-4e7a-aee9-ae1934b81b32)\"" pod="default/hello-node-connect-58d66798bb-69rxt" podUID=2df51cd2-c003-4e7a-aee9-ae1934b81b32
	Jun 15 17:16:50 functional-822000 kubelet[7089]: I0615 17:16:50.753073    7089 topology_manager.go:212] "Topology Admit Handler"
	Jun 15 17:16:50 functional-822000 kubelet[7089]: I0615 17:16:50.876402    7089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/a2ca6f4d-715a-4df7-a5fd-ee8974f37277-test-volume\") pod \"busybox-mount\" (UID: \"a2ca6f4d-715a-4df7-a5fd-ee8974f37277\") " pod="default/busybox-mount"
	Jun 15 17:16:50 functional-822000 kubelet[7089]: I0615 17:16:50.876471    7089 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d6vb\" (UniqueName: \"kubernetes.io/projected/a2ca6f4d-715a-4df7-a5fd-ee8974f37277-kube-api-access-5d6vb\") pod \"busybox-mount\" (UID: \"a2ca6f4d-715a-4df7-a5fd-ee8974f37277\") " pod="default/busybox-mount"
	Jun 15 17:16:55 functional-822000 kubelet[7089]: I0615 17:16:55.828840    7089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/a2ca6f4d-715a-4df7-a5fd-ee8974f37277-test-volume\") pod \"a2ca6f4d-715a-4df7-a5fd-ee8974f37277\" (UID: \"a2ca6f4d-715a-4df7-a5fd-ee8974f37277\") "
	Jun 15 17:16:55 functional-822000 kubelet[7089]: I0615 17:16:55.829118    7089 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5d6vb\" (UniqueName: \"kubernetes.io/projected/a2ca6f4d-715a-4df7-a5fd-ee8974f37277-kube-api-access-5d6vb\") pod \"a2ca6f4d-715a-4df7-a5fd-ee8974f37277\" (UID: \"a2ca6f4d-715a-4df7-a5fd-ee8974f37277\") "
	Jun 15 17:16:55 functional-822000 kubelet[7089]: I0615 17:16:55.828901    7089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a2ca6f4d-715a-4df7-a5fd-ee8974f37277-test-volume" (OuterVolumeSpecName: "test-volume") pod "a2ca6f4d-715a-4df7-a5fd-ee8974f37277" (UID: "a2ca6f4d-715a-4df7-a5fd-ee8974f37277"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 15 17:16:55 functional-822000 kubelet[7089]: I0615 17:16:55.831823    7089 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2ca6f4d-715a-4df7-a5fd-ee8974f37277-kube-api-access-5d6vb" (OuterVolumeSpecName: "kube-api-access-5d6vb") pod "a2ca6f4d-715a-4df7-a5fd-ee8974f37277" (UID: "a2ca6f4d-715a-4df7-a5fd-ee8974f37277"). InnerVolumeSpecName "kube-api-access-5d6vb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 15 17:16:55 functional-822000 kubelet[7089]: I0615 17:16:55.929557    7089 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/a2ca6f4d-715a-4df7-a5fd-ee8974f37277-test-volume\") on node \"functional-822000\" DevicePath \"\""
	Jun 15 17:16:55 functional-822000 kubelet[7089]: I0615 17:16:55.929574    7089 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5d6vb\" (UniqueName: \"kubernetes.io/projected/a2ca6f4d-715a-4df7-a5fd-ee8974f37277-kube-api-access-5d6vb\") on node \"functional-822000\" DevicePath \"\""
	Jun 15 17:16:56 functional-822000 kubelet[7089]: I0615 17:16:56.615661    7089 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52a3026be8d2b6ffa15a20f08c55c6a4663664d6afffaa78c3b561c7b6146548"
	Jun 15 17:16:58 functional-822000 kubelet[7089]: I0615 17:16:58.553146    7089 scope.go:115] "RemoveContainer" containerID="2a70aae686a17a5602dfca9ab6d899fa5d598714d391c21cf62026762a992064"
	Jun 15 17:16:59 functional-822000 kubelet[7089]: I0615 17:16:59.698732    7089 scope.go:115] "RemoveContainer" containerID="2a70aae686a17a5602dfca9ab6d899fa5d598714d391c21cf62026762a992064"
	Jun 15 17:16:59 functional-822000 kubelet[7089]: I0615 17:16:59.699150    7089 scope.go:115] "RemoveContainer" containerID="e14b7ceab190b9c7635bdb2028a741117d52015660950395ba7e7a222f933b6d"
	Jun 15 17:16:59 functional-822000 kubelet[7089]: E0615 17:16:59.699466    7089 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-hj7g8_default(546a4f77-389e-406e-a10d-4e3e67479f3c)\"" pod="default/hello-node-7b684b55f9-hj7g8" podUID=546a4f77-389e-406e-a10d-4e3e67479f3c
	Jun 15 17:17:01 functional-822000 kubelet[7089]: I0615 17:17:01.552742    7089 scope.go:115] "RemoveContainer" containerID="b47a7bf7af32f2c05be43a14b7c475b9fbe761061001bc0482a4f3322360cfb4"
	Jun 15 17:17:01 functional-822000 kubelet[7089]: E0615 17:17:01.555349    7089 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-69rxt_default(2df51cd2-c003-4e7a-aee9-ae1934b81b32)\"" pod="default/hello-node-connect-58d66798bb-69rxt" podUID=2df51cd2-c003-4e7a-aee9-ae1934b81b32
	
	* 
	* ==> storage-provisioner [1ccd31818e92] <==
	* I0615 17:15:10.144525       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0615 17:15:10.148692       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0615 17:15:10.148716       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0615 17:15:10.151905       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0615 17:15:10.151986       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-822000_5ff9b0d9-99d4-453b-80ab-5d0633f4b2b3!
	I0615 17:15:10.152299       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f9cd260-7d9e-4129-8da6-eaa8c151f696", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-822000_5ff9b0d9-99d4-453b-80ab-5d0633f4b2b3 became leader
	I0615 17:15:10.253046       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-822000_5ff9b0d9-99d4-453b-80ab-5d0633f4b2b3!
	
	* 
	* ==> storage-provisioner [ceb47dd9032e] <==
	* I0615 17:15:53.175903       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0615 17:15:53.183091       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0615 17:15:53.183215       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0615 17:16:10.590433       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0615 17:16:10.590685       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-822000_0a533726-fe9e-4ffe-9f1e-5234d0e4e150!
	I0615 17:16:10.591918       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0f9cd260-7d9e-4129-8da6-eaa8c151f696", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-822000_0a533726-fe9e-4ffe-9f1e-5234d0e4e150 became leader
	I0615 17:16:10.691858       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-822000_0a533726-fe9e-4ffe-9f1e-5234d0e4e150!
	I0615 17:16:22.460265       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0615 17:16:22.460335       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    ab89b995-fe1a-4505-a167-15d1c5a0e8ee 341 0 2023-06-15 17:14:45 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-06-15 17:14:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-926a76a5-8669-420d-9798-a42c3daa9a6f &PersistentVolumeClaim{ObjectMeta:{myclaim  default  926a76a5-8669-420d-9798-a42c3daa9a6f 666 0 2023-06-15 17:16:22 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-06-15 17:16:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-06-15 17:16:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0615 17:16:22.461810       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-926a76a5-8669-420d-9798-a42c3daa9a6f" provisioned
	I0615 17:16:22.461846       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0615 17:16:22.461855       1 volume_store.go:212] Trying to save persistentvolume "pvc-926a76a5-8669-420d-9798-a42c3daa9a6f"
	I0615 17:16:22.462306       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"926a76a5-8669-420d-9798-a42c3daa9a6f", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0615 17:16:22.466717       1 volume_store.go:219] persistentvolume "pvc-926a76a5-8669-420d-9798-a42c3daa9a6f" saved
	I0615 17:16:22.468422       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"926a76a5-8669-420d-9798-a42c3daa9a6f", APIVersion:"v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-926a76a5-8669-420d-9798-a42c3daa9a6f
	

-- /stdout --
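Note: the claim provisioned above can be read straight out of the object dump in the log: default/myclaim, 500Mi, ReadWriteOnce, Filesystem, bound to the default class "standard". For reference, an equivalent manifest reconstructed from that dump (not the test's literal file) would be:

    kubectl --context functional-822000 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce      # matches AccessModes:[ReadWriteOnce] in the dump
      resources:
        requests:
          storage: 500Mi     # matches {{524288000 0} ... 500Mi BinarySI}
      volumeMode: Filesystem
    EOF

With storageClassName omitted the claim falls through to the default class, which is why the provisioner logs it under class "standard".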
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-822000 -n functional-822000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-822000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-822000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-822000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-822000/192.168.105.4
	Start Time:       Thu, 15 Jun 2023 10:16:50 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://dc6cb233a840f3d32c92fdee5080c4fdb6da9a181367ef3f647566e29e32b5a0
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 15 Jun 2023 10:16:53 -0700
	      Finished:     Thu, 15 Jun 2023 10:16:53 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5d6vb (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-5d6vb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  13s   default-scheduler  Successfully assigned default/busybox-mount to functional-822000
	  Normal  Pulling    12s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.401140641s (2.401149308s including waiting)
	  Normal  Created    10s   kubelet            Created container mount-munger
	  Normal  Started    10s   kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (34.50s)
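Note: busybox-mount exited 0 with phase Succeeded, so it trips the harness's status.phase!=Running filter even though the container itself completed cleanly. The same check can be reproduced by hand (assuming the profile and pod are still around):

    # the harness's non-running query, verbatim from the log above:
    kubectl --context functional-822000 get po -A --field-selector=status.phase!=Running
    # output of the completed mount-munger container:
    kubectl --context functional-822000 logs busybox-mount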

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-822000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-822000 image ls --format yaml --alsologtostderr:
I0615 10:17:21.206153    3115 out.go:296] Setting OutFile to fd 1 ...
I0615 10:17:21.206331    3115 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:17:21.206334    3115 out.go:309] Setting ErrFile to fd 2...
I0615 10:17:21.206337    3115 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:17:21.206417    3115 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
I0615 10:17:21.206808    3115 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0615 10:17:21.206868    3115 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
W0615 10:17:21.207108    3115 cache_images.go:695] error getting status for functional-822000: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/monitor: connect: connection refused
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
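Note: the stderr above is the real signal here: the qemu2 driver could not reach the VM's monitor socket (connection refused), so image ls silently returned an empty list and the expected registry.k8s.io/pause entry never showed up. A quick way to confirm the guest is actually gone, using the paths from that stderr (a diagnostic sketch, not part of the test):

    out/minikube-darwin-arm64 status -p functional-822000
    ls -l /Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/monitor
    pgrep -fl qemu-system-aarch64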

TestImageBuild/serial/BuildWithBuildArg (1.1s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-116000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-116000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in e2b387509208
	Removing intermediate container e2b387509208
	 ---> 56da676c78ad
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 02788af3b11d
	Removing intermediate container 02788af3b11d
	 ---> 12166db59caf
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 8295f062314b
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
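Note: the per-step warnings point at the root cause: gcr.io/google-containers/alpine-with-bash:1.0 only ships linux/amd64, so its /bin/sh cannot execute on this linux/arm64/v8 host and the RUN step dies with "exec format error". Two hedged workarounds, neither part of this test: rebase the test image onto a multi-arch base image, or register amd64 binfmt emulation inside the node (tonistiigi/binfmt is the upstream buildx helper image; assuming it runs on this guest kernel):

    # hypothetical fix-up, not the test's own setup:
    out/minikube-darwin-arm64 ssh -p image-116000 -- docker run --privileged --rm tonistiigi/binfmt --install amd64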
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-116000 -n image-116000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-116000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| docker-env     | functional-822000 docker-env                             | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| docker-env     | functional-822000 docker-env                             | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| ssh            | functional-822000 ssh sudo cat                           | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | /etc/test/nested/copy/1313/hosts                         |                   |         |         |                     |                     |
	| update-context | functional-822000                                        | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-822000                                        | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| update-context | functional-822000                                        | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| image          | functional-822000 image load --daemon                    | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-822000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-822000 image ls                               | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| image          | functional-822000 image save                             | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-822000 |                   |         |         |                     |                     |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-822000 image rm                               | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-822000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-822000 image ls                               | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| image          | functional-822000 image load                             | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-822000 image ls                               | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| image          | functional-822000 image save --daemon                    | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-822000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-822000                                        | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-822000                                        | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-822000 ssh pgrep                              | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-822000                                        | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-822000 image build -t                         | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | localhost/my-image:functional-822000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-822000                                        | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-822000 image ls                               | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| delete         | -p functional-822000                                     | functional-822000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| start          | -p image-116000 --driver=qemu2                           | image-116000      | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                |                                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-116000      | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | ./testdata/image-build/test-normal                       |                   |         |         |                     |                     |
	|                | -p image-116000                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-116000      | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                   |         |         |                     |                     |
	|                | image-116000                                             |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 10:17:24
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 10:17:24.133976    3140 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:17:24.134096    3140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:17:24.134098    3140 out.go:309] Setting ErrFile to fd 2...
	I0615 10:17:24.134100    3140 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:17:24.134169    3140 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:17:24.135198    3140 out.go:303] Setting JSON to false
	I0615 10:17:24.151247    3140 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2815,"bootTime":1686846629,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:17:24.151308    3140 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:17:24.154144    3140 out.go:177] * [image-116000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:17:24.162195    3140 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:17:24.162214    3140 notify.go:220] Checking for updates...
	I0615 10:17:24.169122    3140 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:17:24.172103    3140 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:17:24.175129    3140 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:17:24.178122    3140 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:17:24.181131    3140 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:17:24.184210    3140 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:17:24.188085    3140 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:17:24.194989    3140 start.go:297] selected driver: qemu2
	I0615 10:17:24.194993    3140 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:17:24.194999    3140 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:17:24.195077    3140 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:17:24.198024    3140 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:17:24.203404    3140 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0615 10:17:24.203502    3140 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0615 10:17:24.203514    3140 cni.go:84] Creating CNI manager for ""
	I0615 10:17:24.203518    3140 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:17:24.203521    3140 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:17:24.203527    3140 start_flags.go:319] config:
	{Name:image-116000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-116000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:17:24.203623    3140 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:17:24.210009    3140 out.go:177] * Starting control plane node image-116000 in cluster image-116000
	I0615 10:17:24.212944    3140 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:17:24.212978    3140 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:17:24.212986    3140 cache.go:57] Caching tarball of preloaded images
	I0615 10:17:24.213042    3140 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:17:24.213046    3140 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:17:24.213245    3140 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/config.json ...
	I0615 10:17:24.213259    3140 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/config.json: {Name:mk3c67ec30bf3797abdc8d63e2a50d1330cbcf04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:17:24.213472    3140 start.go:365] acquiring machines lock for image-116000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:17:24.213500    3140 start.go:369] acquired machines lock for "image-116000" in 24.958µs
	I0615 10:17:24.213511    3140 start.go:93] Provisioning new machine with config: &{Name:image-116000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-116000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:17:24.213532    3140 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:17:24.220956    3140 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0615 10:17:24.244717    3140 start.go:159] libmachine.API.Create for "image-116000" (driver="qemu2")
	I0615 10:17:24.244739    3140 client.go:168] LocalClient.Create starting
	I0615 10:17:24.244792    3140 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:17:24.244827    3140 main.go:141] libmachine: Decoding PEM data...
	I0615 10:17:24.244834    3140 main.go:141] libmachine: Parsing certificate...
	I0615 10:17:24.244875    3140 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:17:24.244888    3140 main.go:141] libmachine: Decoding PEM data...
	I0615 10:17:24.244894    3140 main.go:141] libmachine: Parsing certificate...
	I0615 10:17:24.245178    3140 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:17:24.676624    3140 main.go:141] libmachine: Creating SSH key...
	I0615 10:17:24.849247    3140 main.go:141] libmachine: Creating Disk image...
	I0615 10:17:24.849253    3140 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:17:24.849438    3140 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/disk.qcow2
	I0615 10:17:24.863372    3140 main.go:141] libmachine: STDOUT: 
	I0615 10:17:24.863387    3140 main.go:141] libmachine: STDERR: 
	I0615 10:17:24.863453    3140 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/disk.qcow2 +20000M
	I0615 10:17:24.870801    3140 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:17:24.870821    3140 main.go:141] libmachine: STDERR: 
	I0615 10:17:24.870841    3140 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/disk.qcow2
	I0615 10:17:24.870848    3140 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:17:24.870886    3140 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:28:2d:d8:f3:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/disk.qcow2
	I0615 10:17:24.906589    3140 main.go:141] libmachine: STDOUT: 
	I0615 10:17:24.906604    3140 main.go:141] libmachine: STDERR: 
	I0615 10:17:24.906607    3140 main.go:141] libmachine: Attempt 0
	I0615 10:17:24.906615    3140 main.go:141] libmachine: Searching for 4a:28:2d:d8:f3:b7 in /var/db/dhcpd_leases ...
	I0615 10:17:24.906840    3140 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0615 10:17:24.906857    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:17:24.906866    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:17:24.906870    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:17:26.904485    3140 main.go:141] libmachine: Attempt 1
	I0615 10:17:26.904528    3140 main.go:141] libmachine: Searching for 4a:28:2d:d8:f3:b7 in /var/db/dhcpd_leases ...
	I0615 10:17:26.904873    3140 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0615 10:17:26.904913    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:17:26.904973    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:17:26.905021    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:17:28.903188    3140 main.go:141] libmachine: Attempt 2
	I0615 10:17:28.903202    3140 main.go:141] libmachine: Searching for 4a:28:2d:d8:f3:b7 in /var/db/dhcpd_leases ...
	I0615 10:17:28.903325    3140 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0615 10:17:28.903336    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:17:28.903341    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:17:28.903345    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:17:30.901865    3140 main.go:141] libmachine: Attempt 3
	I0615 10:17:30.901870    3140 main.go:141] libmachine: Searching for 4a:28:2d:d8:f3:b7 in /var/db/dhcpd_leases ...
	I0615 10:17:30.901901    3140 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0615 10:17:30.901906    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:17:30.901911    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:17:30.901916    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:17:32.900904    3140 main.go:141] libmachine: Attempt 4
	I0615 10:17:32.900914    3140 main.go:141] libmachine: Searching for 4a:28:2d:d8:f3:b7 in /var/db/dhcpd_leases ...
	I0615 10:17:32.901031    3140 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0615 10:17:32.901042    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:17:32.901077    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:17:32.901081    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:17:34.900435    3140 main.go:141] libmachine: Attempt 5
	I0615 10:17:34.900444    3140 main.go:141] libmachine: Searching for 4a:28:2d:d8:f3:b7 in /var/db/dhcpd_leases ...
	I0615 10:17:34.900507    3140 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0615 10:17:34.900515    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:17:34.900519    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:17:34.900523    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:17:36.900203    3140 main.go:141] libmachine: Attempt 6
	I0615 10:17:36.900217    3140 main.go:141] libmachine: Searching for 4a:28:2d:d8:f3:b7 in /var/db/dhcpd_leases ...
	I0615 10:17:36.900332    3140 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0615 10:17:36.900344    3140 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:28:2d:d8:f3:b7 ID:1,4a:28:2d:d8:f3:b7 Lease:0x648c992f}
	I0615 10:17:36.900347    3140 main.go:141] libmachine: Found match: 4a:28:2d:d8:f3:b7
	I0615 10:17:36.900358    3140 main.go:141] libmachine: IP: 192.168.105.5
	I0615 10:17:36.900362    3140 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I0615 10:17:37.905248    3140 machine.go:88] provisioning docker machine ...
	I0615 10:17:37.905273    3140 buildroot.go:166] provisioning hostname "image-116000"
	I0615 10:17:37.905342    3140 main.go:141] libmachine: Using SSH client type: native
	I0615 10:17:37.905661    3140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc8e20] 0x104fcb880 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0615 10:17:37.905665    3140 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-116000 && echo "image-116000" | sudo tee /etc/hostname
	I0615 10:17:37.928740    3140 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0615 10:17:41.026890    3140 main.go:141] libmachine: SSH cmd err, output: <nil>: image-116000
	
	I0615 10:17:41.027003    3140 main.go:141] libmachine: Using SSH client type: native
	I0615 10:17:41.027497    3140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc8e20] 0x104fcb880 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0615 10:17:41.027509    3140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-116000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-116000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-116000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0615 10:17:41.100431    3140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0615 10:17:41.100443    3140 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16718-868/.minikube CaCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16718-868/.minikube}
	I0615 10:17:41.100459    3140 buildroot.go:174] setting up certificates
	I0615 10:17:41.100468    3140 provision.go:83] configureAuth start
	I0615 10:17:41.100473    3140 provision.go:138] copyHostCerts
	I0615 10:17:41.100599    3140 exec_runner.go:144] found /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem, removing ...
	I0615 10:17:41.100605    3140 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem
	I0615 10:17:41.100783    3140 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem (1078 bytes)
	I0615 10:17:41.101069    3140 exec_runner.go:144] found /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem, removing ...
	I0615 10:17:41.101072    3140 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem
	I0615 10:17:41.101134    3140 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem (1123 bytes)
	I0615 10:17:41.101277    3140 exec_runner.go:144] found /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem, removing ...
	I0615 10:17:41.101279    3140 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem
	I0615 10:17:41.101331    3140 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem (1679 bytes)
	I0615 10:17:41.101452    3140 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem org=jenkins.image-116000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-116000]
	I0615 10:17:41.187792    3140 provision.go:172] copyRemoteCerts
	I0615 10:17:41.187835    3140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0615 10:17:41.187840    3140 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/id_rsa Username:docker}
	I0615 10:17:41.222171    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0615 10:17:41.229130    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0615 10:17:41.235849    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0615 10:17:41.242591    3140 provision.go:86] duration metric: configureAuth took 142.239667ms
	I0615 10:17:41.242596    3140 buildroot.go:189] setting minikube options for container-runtime
	I0615 10:17:41.242697    3140 config.go:182] Loaded profile config "image-116000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:17:41.242731    3140 main.go:141] libmachine: Using SSH client type: native
	I0615 10:17:41.242949    3140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc8e20] 0x104fcb880 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0615 10:17:41.242952    3140 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0615 10:17:41.303993    3140 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0615 10:17:41.304003    3140 buildroot.go:70] root file system type: tmpfs
	I0615 10:17:41.304066    3140 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0615 10:17:41.304124    3140 main.go:141] libmachine: Using SSH client type: native
	I0615 10:17:41.304391    3140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc8e20] 0x104fcb880 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0615 10:17:41.304427    3140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0615 10:17:41.372389    3140 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0615 10:17:41.372441    3140 main.go:141] libmachine: Using SSH client type: native
	I0615 10:17:41.372716    3140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc8e20] 0x104fcb880 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0615 10:17:41.372724    3140 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0615 10:17:41.727731    3140 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0615 10:17:41.727741    3140 machine.go:91] provisioned docker machine in 3.826059125s
	I0615 10:17:41.727745    3140 client.go:171] LocalClient.Create took 17.509587458s
	I0615 10:17:41.727758    3140 start.go:167] duration metric: libmachine.API.Create for "image-116000" took 17.50963175s
	I0615 10:17:41.727761    3140 start.go:300] post-start starting for "image-116000" (driver="qemu2")
	I0615 10:17:41.727765    3140 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0615 10:17:41.727836    3140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0615 10:17:41.727843    3140 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/id_rsa Username:docker}
	I0615 10:17:41.760794    3140 ssh_runner.go:195] Run: cat /etc/os-release
	I0615 10:17:41.762240    3140 info.go:137] Remote host: Buildroot 2021.02.12
	I0615 10:17:41.762245    3140 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/addons for local assets ...
	I0615 10:17:41.762311    3140 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/files for local assets ...
	I0615 10:17:41.762414    3140 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem -> 13132.pem in /etc/ssl/certs
	I0615 10:17:41.762532    3140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0615 10:17:41.765384    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem --> /etc/ssl/certs/13132.pem (1708 bytes)
	I0615 10:17:41.772021    3140 start.go:303] post-start completed in 44.293209ms
	I0615 10:17:41.772392    3140 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/config.json ...
	I0615 10:17:41.772536    3140 start.go:128] duration metric: createHost completed in 17.585699833s
	I0615 10:17:41.772566    3140 main.go:141] libmachine: Using SSH client type: native
	I0615 10:17:41.772784    3140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104fc8e20] 0x104fcb880 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0615 10:17:41.772787    3140 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0615 10:17:41.830052    3140 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686849461.577368711
	
	I0615 10:17:41.830056    3140 fix.go:206] guest clock: 1686849461.577368711
	I0615 10:17:41.830059    3140 fix.go:219] Guest: 2023-06-15 10:17:41.577368711 -0700 PDT Remote: 2023-06-15 10:17:41.772541 -0700 PDT m=+17.685312585 (delta=-195.172289ms)
	I0615 10:17:41.830070    3140 fix.go:190] guest clock delta is within tolerance: -195.172289ms
	I0615 10:17:41.830072    3140 start.go:83] releasing machines lock for "image-116000", held for 17.643316458s
	I0615 10:17:41.830367    3140 ssh_runner.go:195] Run: cat /version.json
	I0615 10:17:41.830373    3140 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/id_rsa Username:docker}
	I0615 10:17:41.830381    3140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0615 10:17:41.830396    3140 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/id_rsa Username:docker}
	I0615 10:17:41.906137    3140 ssh_runner.go:195] Run: systemctl --version
	I0615 10:17:41.908301    3140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0615 10:17:41.910126    3140 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0615 10:17:41.910159    3140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0615 10:17:41.915641    3140 cni.go:268] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0615 10:17:41.915646    3140 start.go:466] detecting cgroup driver to use...
	I0615 10:17:41.915738    3140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 10:17:41.921497    3140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0615 10:17:41.925033    3140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0615 10:17:41.928345    3140 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0615 10:17:41.928367    3140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0615 10:17:41.931316    3140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 10:17:41.934221    3140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0615 10:17:41.937638    3140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 10:17:41.940938    3140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0615 10:17:41.944352    3140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0615 10:17:41.947253    3140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0615 10:17:41.950176    3140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0615 10:17:41.953927    3140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:17:42.036569    3140 ssh_runner.go:195] Run: sudo systemctl restart containerd
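The piecemeal sed runs above leave containerd on the cgroupfs driver with the runc v2 shim and the standard CNI conf dir. A sketch of the core edits collapsed into one invocation (a subset of the expressions logged above):

    # consolidated form of the /etc/containerd/config.toml edits performed above
    sudo sed -i -r \
      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
      -e '/systemd_cgroup/d' \
      -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
      /etc/containerd/config.toml
    sudo systemctl daemon-reload && sudo systemctl restart containerd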
	I0615 10:17:42.046400    3140 start.go:466] detecting cgroup driver to use...
	I0615 10:17:42.046477    3140 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0615 10:17:42.053648    3140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 10:17:42.061541    3140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0615 10:17:42.071842    3140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 10:17:42.076539    3140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 10:17:42.080988    3140 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0615 10:17:42.129160    3140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 10:17:42.134786    3140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 10:17:42.140777    3140 ssh_runner.go:195] Run: which cri-dockerd
	I0615 10:17:42.142039    3140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0615 10:17:42.145033    3140 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0615 10:17:42.150029    3140 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0615 10:17:42.225967    3140 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0615 10:17:42.305403    3140 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0615 10:17:42.305411    3140 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0615 10:17:42.310951    3140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:17:42.386610    3140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 10:17:43.545767    3140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.160030917s)
	I0615 10:17:43.545837    3140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 10:17:43.621468    3140 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0615 10:17:43.696614    3140 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0615 10:17:43.771769    3140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:17:43.853349    3140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0615 10:17:43.860788    3140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:17:43.940646    3140 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0615 10:17:43.964456    3140 start.go:513] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0615 10:17:43.964553    3140 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0615 10:17:43.966642    3140 start.go:534] Will wait 60s for crictl version
	I0615 10:17:43.966686    3140 ssh_runner.go:195] Run: which crictl
	I0615 10:17:43.968123    3140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0615 10:17:43.983856    3140 start.go:550] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0615 10:17:43.983950    3140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 10:17:43.993646    3140 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 10:17:44.017091    3140 out.go:204] * Preparing Kubernetes v1.27.3 on Docker 24.0.2 ...
	I0615 10:17:44.017241    3140 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0615 10:17:44.018691    3140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
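That one-liner is minikube's idempotent hosts update: drop any stale host.minikube.internal entry, append the fresh mapping, and copy the result back over /etc/hosts via a temp file. The same pattern spelled out (IP and hostname as logged):

    # remove a stale entry, append the current one, replace the file in a single sudo cp
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.105.1\thost.minikube.internal'
    } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts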
	I0615 10:17:44.022574    3140 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:17:44.022617    3140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 10:17:44.028367    3140 docker.go:636] Got preloaded images: 
	I0615 10:17:44.028371    3140 docker.go:642] registry.k8s.io/kube-apiserver:v1.27.3 wasn't preloaded
	I0615 10:17:44.028413    3140 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 10:17:44.031787    3140 ssh_runner.go:195] Run: which lz4
	I0615 10:17:44.033430    3140 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0615 10:17:44.034710    3140 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0615 10:17:44.034722    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635922 bytes)
	I0615 10:17:45.327062    3140 docker.go:600] Took 1.294579 seconds to copy over tarball
	I0615 10:17:45.327115    3140 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0615 10:17:46.353247    3140 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.026772333s)
	I0615 10:17:46.353256    3140 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0615 10:17:46.368973    3140 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 10:17:46.371930    3140 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0615 10:17:46.376846    3140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:17:46.447850    3140 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 10:17:47.921662    3140 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.474669792s)
	I0615 10:17:47.921755    3140 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 10:17:47.927948    3140 docker.go:636] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.3
	registry.k8s.io/kube-controller-manager:v1.27.3
	registry.k8s.io/kube-scheduler:v1.27.3
	registry.k8s.io/kube-proxy:v1.27.3
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0615 10:17:47.927953    3140 cache_images.go:84] Images are preloaded, skipping loading
	I0615 10:17:47.928000    3140 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0615 10:17:47.935921    3140 cni.go:84] Creating CNI manager for ""
	I0615 10:17:47.935927    3140 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:17:47.935937    3140 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0615 10:17:47.935945    3140 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.27.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-116000 NodeName:image-116000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0615 10:17:47.936011    3140 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-116000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0615 10:17:47.936040    3140 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-116000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.3 ClusterName:image-116000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0615 10:17:47.936094    3140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.3
	I0615 10:17:47.939046    3140 binaries.go:44] Found k8s binaries, skipping transfer
	I0615 10:17:47.939072    3140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0615 10:17:47.941801    3140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0615 10:17:47.947008    3140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0615 10:17:47.951824    3140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
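The rendered kubeadm config shown above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new. A hedged way to validate such a file before running init, assuming kubeadm v1.27.3 on the guest PATH as in the binaries check above:

    # dry-run parses and validates the config without touching the node's cluster state
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run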
	I0615 10:17:47.957031    3140 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0615 10:17:47.958492    3140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 10:17:47.962168    3140 certs.go:56] Setting up /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000 for IP: 192.168.105.5
	I0615 10:17:47.962178    3140 certs.go:190] acquiring lock for shared ca certs: {Name:mk9ee4d7ca68f2cc32c8609d33f6ce33c43a91d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:17:47.962313    3140 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key
	I0615 10:17:47.963266    3140 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key
	I0615 10:17:47.963294    3140 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/client.key
	I0615 10:17:47.963300    3140 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/client.crt with IP's: []
	I0615 10:17:48.089406    3140 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/client.crt ...
	I0615 10:17:48.089410    3140 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/client.crt: {Name:mk1c8956fda13298200e3e5c7fe47b7fb1ff5595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:17:48.089628    3140 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/client.key ...
	I0615 10:17:48.089630    3140 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/client.key: {Name:mkf68633e7d119b4e15c60b3e39d9c727e397cb9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:17:48.089742    3140 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.key.e69b33ca
	I0615 10:17:48.089747    3140 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0615 10:17:48.200009    3140 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.crt.e69b33ca ...
	I0615 10:17:48.200011    3140 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.crt.e69b33ca: {Name:mk5aeea937d83f4f34511d49746132f0e33fe6f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:17:48.200161    3140 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.key.e69b33ca ...
	I0615 10:17:48.200163    3140 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.key.e69b33ca: {Name:mka04beb5adaf045b61bf39d74b23cf8c0431898 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:17:48.200266    3140 certs.go:337] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.crt
	I0615 10:17:48.200470    3140 certs.go:341] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.key
	I0615 10:17:48.200570    3140 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/proxy-client.key
	I0615 10:17:48.200576    3140 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/proxy-client.crt with IP's: []
	I0615 10:17:48.301180    3140 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/proxy-client.crt ...
	I0615 10:17:48.301184    3140 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/proxy-client.crt: {Name:mkb093b130ca1cc62da5be681716ee2627548b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:17:48.301394    3140 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/proxy-client.key ...
	I0615 10:17:48.301396    3140 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/proxy-client.key: {Name:mka63155437cf3cb3dbe2cda62c4030457fc6524 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:17:48.301635    3140 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/1313.pem (1338 bytes)
	W0615 10:17:48.302067    3140 certs.go:433] ignoring /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/1313_empty.pem, impossibly tiny 0 bytes
	I0615 10:17:48.302076    3140 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem (1679 bytes)
	I0615 10:17:48.302102    3140 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem (1078 bytes)
	I0615 10:17:48.302124    3140 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem (1123 bytes)
	I0615 10:17:48.302140    3140 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem (1679 bytes)
	I0615 10:17:48.302197    3140 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem (1708 bytes)
	I0615 10:17:48.302481    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0615 10:17:48.309755    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0615 10:17:48.317079    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0615 10:17:48.324375    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/image-116000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0615 10:17:48.331504    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0615 10:17:48.338312    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0615 10:17:48.345548    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0615 10:17:48.352795    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0615 10:17:48.359779    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem --> /usr/share/ca-certificates/13132.pem (1708 bytes)
	I0615 10:17:48.366379    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0615 10:17:48.373313    3140 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/1313.pem --> /usr/share/ca-certificates/1313.pem (1338 bytes)
	I0615 10:17:48.380667    3140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0615 10:17:48.385946    3140 ssh_runner.go:195] Run: openssl version
	I0615 10:17:48.388091    3140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13132.pem && ln -fs /usr/share/ca-certificates/13132.pem /etc/ssl/certs/13132.pem"
	I0615 10:17:48.391309    3140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13132.pem
	I0615 10:17:48.392870    3140 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 15 17:14 /usr/share/ca-certificates/13132.pem
	I0615 10:17:48.392890    3140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13132.pem
	I0615 10:17:48.394911    3140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13132.pem /etc/ssl/certs/3ec20f2e.0"
	I0615 10:17:48.397980    3140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0615 10:17:48.401339    3140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0615 10:17:48.402936    3140 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 15 16:33 /usr/share/ca-certificates/minikubeCA.pem
	I0615 10:17:48.402953    3140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0615 10:17:48.404771    3140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0615 10:17:48.408032    3140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1313.pem && ln -fs /usr/share/ca-certificates/1313.pem /etc/ssl/certs/1313.pem"
	I0615 10:17:48.411099    3140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1313.pem
	I0615 10:17:48.412602    3140 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 15 17:14 /usr/share/ca-certificates/1313.pem
	I0615 10:17:48.412617    3140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1313.pem
	I0615 10:17:48.414595    3140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1313.pem /etc/ssl/certs/51391683.0"
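The hash-named links created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash lookup convention: the link name is the certificate's subject hash plus a .0 suffix, which is how tools resolve CAs in /etc/ssl/certs. Creating one by hand, using the same two commands the log runs:

    # compute the subject hash, then create the lookup symlink OpenSSL expects
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"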
	I0615 10:17:48.417868    3140 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0615 10:17:48.419371    3140 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0615 10:17:48.419402    3140 kubeadm.go:404] StartCluster: {Name:image-116000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:image-116000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:17:48.419468    3140 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0615 10:17:48.424844    3140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0615 10:17:48.428245    3140 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0615 10:17:48.431096    3140 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0615 10:17:48.433900    3140 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0615 10:17:48.433910    3140 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0615 10:17:48.456814    3140 kubeadm.go:322] [init] Using Kubernetes version: v1.27.3
	I0615 10:17:48.456838    3140 kubeadm.go:322] [preflight] Running pre-flight checks
	I0615 10:17:48.522601    3140 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0615 10:17:48.522668    3140 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0615 10:17:48.522717    3140 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0615 10:17:48.581276    3140 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0615 10:17:48.586531    3140 out.go:204]   - Generating certificates and keys ...
	I0615 10:17:48.586575    3140 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0615 10:17:48.586603    3140 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0615 10:17:48.672467    3140 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0615 10:17:48.827517    3140 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0615 10:17:48.960861    3140 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0615 10:17:49.040121    3140 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0615 10:17:49.077402    3140 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0615 10:17:49.077464    3140 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-116000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0615 10:17:49.362851    3140 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0615 10:17:49.362913    3140 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-116000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0615 10:17:49.487202    3140 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0615 10:17:49.588503    3140 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0615 10:17:49.651703    3140 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0615 10:17:49.651729    3140 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0615 10:17:49.708506    3140 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0615 10:17:49.811431    3140 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0615 10:17:49.971538    3140 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0615 10:17:50.062650    3140 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0615 10:17:50.069312    3140 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0615 10:17:50.069357    3140 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0615 10:17:50.069382    3140 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0615 10:17:50.140360    3140 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0615 10:17:50.143529    3140 out.go:204]   - Booting up control plane ...
	I0615 10:17:50.143604    3140 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0615 10:17:50.143643    3140 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0615 10:17:50.143679    3140 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0615 10:17:50.143723    3140 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0615 10:17:50.144527    3140 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0615 10:17:53.647301    3140 kubeadm.go:322] [apiclient] All control plane components are healthy after 3.502645 seconds
	I0615 10:17:53.647357    3140 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0615 10:17:53.651554    3140 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0615 10:17:54.170684    3140 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0615 10:17:54.170957    3140 kubeadm.go:322] [mark-control-plane] Marking the node image-116000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0615 10:17:54.674656    3140 kubeadm.go:322] [bootstrap-token] Using token: p56fr7.2n38v78piw6h643x
	I0615 10:17:54.680179    3140 out.go:204]   - Configuring RBAC rules ...
	I0615 10:17:54.680239    3140 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0615 10:17:54.681188    3140 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0615 10:17:54.688469    3140 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0615 10:17:54.689702    3140 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0615 10:17:54.690831    3140 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0615 10:17:54.692076    3140 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0615 10:17:54.696014    3140 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0615 10:17:54.885594    3140 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0615 10:17:55.083320    3140 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0615 10:17:55.083821    3140 kubeadm.go:322] 
	I0615 10:17:55.083861    3140 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0615 10:17:55.083863    3140 kubeadm.go:322] 
	I0615 10:17:55.083900    3140 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0615 10:17:55.083902    3140 kubeadm.go:322] 
	I0615 10:17:55.083913    3140 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0615 10:17:55.083941    3140 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0615 10:17:55.083974    3140 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0615 10:17:55.083980    3140 kubeadm.go:322] 
	I0615 10:17:55.084009    3140 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0615 10:17:55.084011    3140 kubeadm.go:322] 
	I0615 10:17:55.084045    3140 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0615 10:17:55.084047    3140 kubeadm.go:322] 
	I0615 10:17:55.084077    3140 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0615 10:17:55.084120    3140 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0615 10:17:55.084157    3140 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0615 10:17:55.084159    3140 kubeadm.go:322] 
	I0615 10:17:55.084217    3140 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0615 10:17:55.084260    3140 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0615 10:17:55.084263    3140 kubeadm.go:322] 
	I0615 10:17:55.084308    3140 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token p56fr7.2n38v78piw6h643x \
	I0615 10:17:55.084366    3140 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 \
	I0615 10:17:55.084378    3140 kubeadm.go:322] 	--control-plane 
	I0615 10:17:55.084385    3140 kubeadm.go:322] 
	I0615 10:17:55.084438    3140 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0615 10:17:55.084439    3140 kubeadm.go:322] 
	I0615 10:17:55.084493    3140 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token p56fr7.2n38v78piw6h643x \
	I0615 10:17:55.084555    3140 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 
	I0615 10:17:55.084645    3140 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
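The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's public key. It can be recomputed on the control plane with the standard kubeadm recipe (CA path per the certificatesDir configured earlier in this log):

    # should reproduce the sha256:... value printed in the join command
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex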
	I0615 10:17:55.084653    3140 cni.go:84] Creating CNI manager for ""
	I0615 10:17:55.084659    3140 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:17:55.091813    3140 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0615 10:17:55.095703    3140 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0615 10:17:55.098694    3140 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0615 10:17:55.103300    3140 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0615 10:17:55.103345    3140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:17:55.103368    3140 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.3/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627 minikube.k8s.io/name=image-116000 minikube.k8s.io/updated_at=2023_06_15T10_17_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:17:55.108868    3140 ops.go:34] apiserver oom_adj: -16
	I0615 10:17:55.173055    3140 kubeadm.go:1081] duration metric: took 69.764541ms to wait for elevateKubeSystemPrivileges.
	I0615 10:17:55.173064    3140 kubeadm.go:406] StartCluster complete in 6.756653041s
	I0615 10:17:55.173071    3140 settings.go:142] acquiring lock: {Name:mk45a698fcd8dd8ae6984c9cf4ad4d183fdb5424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:17:55.173151    3140 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:17:55.173490    3140 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/kubeconfig: {Name:mkbe9cac04fb467055323f2e3d5db2c6ddc287ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:17:55.173663    3140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0615 10:17:55.173715    3140 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0615 10:17:55.173758    3140 config.go:182] Loaded profile config "image-116000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:17:55.173757    3140 addons.go:66] Setting storage-provisioner=true in profile "image-116000"
	I0615 10:17:55.173763    3140 addons.go:228] Setting addon storage-provisioner=true in "image-116000"
	I0615 10:17:55.173785    3140 host.go:66] Checking if "image-116000" exists ...
	I0615 10:17:55.173789    3140 addons.go:66] Setting default-storageclass=true in profile "image-116000"
	I0615 10:17:55.173795    3140 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-116000"
	I0615 10:17:55.178808    3140 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0615 10:17:55.182834    3140 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0615 10:17:55.182837    3140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0615 10:17:55.182844    3140 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/id_rsa Username:docker}
	I0615 10:17:55.187923    3140 addons.go:228] Setting addon default-storageclass=true in "image-116000"
	I0615 10:17:55.187939    3140 host.go:66] Checking if "image-116000" exists ...
	I0615 10:17:55.188566    3140 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0615 10:17:55.188569    3140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0615 10:17:55.188575    3140 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/image-116000/id_rsa Username:docker}
	I0615 10:17:55.217688    3140 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0615 10:17:55.223880    3140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0615 10:17:55.243213    3140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0615 10:17:55.679831    3140 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0615 10:17:55.694334    3140 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-116000" context rescaled to 1 replicas
	I0615 10:17:55.694348    3140 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:17:55.702778    3140 out.go:177] * Verifying Kubernetes components...
	I0615 10:17:55.706744    3140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 10:17:55.765606    3140 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0615 10:17:55.762385    3140 api_server.go:52] waiting for apiserver process to appear ...
	I0615 10:17:55.773780    3140 addons.go:499] enable addons completed in 600.273792ms: enabled=[storage-provisioner default-storageclass]
	I0615 10:17:55.773809    3140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 10:17:55.778280    3140 api_server.go:72] duration metric: took 83.9505ms to wait for apiserver process to appear ...
	I0615 10:17:55.778283    3140 api_server.go:88] waiting for apiserver healthz status ...
	I0615 10:17:55.778288    3140 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0615 10:17:55.781164    3140 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
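The healthz probe is easy to reproduce from the host; the endpoint returns a bare "ok" once the apiserver is serving (IP and port as logged; -k skips verification against the minikube CA):

    curl -k https://192.168.105.5:8443/healthz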
	I0615 10:17:55.781724    3140 api_server.go:141] control plane version: v1.27.3
	I0615 10:17:55.781728    3140 api_server.go:131] duration metric: took 3.444125ms to wait for apiserver health ...
	I0615 10:17:55.781733    3140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0615 10:17:55.784482    3140 system_pods.go:59] 5 kube-system pods found
	I0615 10:17:55.784488    3140 system_pods.go:61] "etcd-image-116000" [7a18fbcb-711d-4036-bdba-8356ce340164] Pending
	I0615 10:17:55.784490    3140 system_pods.go:61] "kube-apiserver-image-116000" [94781920-d17b-4d0f-838e-34dc1039874a] Pending
	I0615 10:17:55.784492    3140 system_pods.go:61] "kube-controller-manager-image-116000" [ba32aa71-29f2-46d3-a3ad-8fed448eb28b] Pending
	I0615 10:17:55.784494    3140 system_pods.go:61] "kube-scheduler-image-116000" [41aa0ab8-594c-4af8-93ac-8b6be18a263d] Pending
	I0615 10:17:55.784495    3140 system_pods.go:61] "storage-provisioner" [0c1d37e5-dac4-4665-8a1f-5f49e6bfb177] Pending
	I0615 10:17:55.784497    3140 system_pods.go:74] duration metric: took 2.7635ms to wait for pod list to return data ...
	I0615 10:17:55.784500    3140 kubeadm.go:581] duration metric: took 90.174084ms to wait for : map[apiserver:true system_pods:true] ...
	I0615 10:17:55.784505    3140 node_conditions.go:102] verifying NodePressure condition ...
	I0615 10:17:55.785795    3140 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0615 10:17:55.785801    3140 node_conditions.go:123] node cpu capacity is 2
	I0615 10:17:55.785806    3140 node_conditions.go:105] duration metric: took 1.3ms to run NodePressure ...
	I0615 10:17:55.785809    3140 start.go:228] waiting for startup goroutines ...
	I0615 10:17:55.785812    3140 start.go:233] waiting for cluster config update ...
	I0615 10:17:55.785816    3140 start.go:242] writing updated cluster config ...
	I0615 10:17:55.786090    3140 ssh_runner.go:195] Run: rm -f paused
	I0615 10:17:55.814589    3140 start.go:582] kubectl: 1.25.9, cluster: 1.27.3 (minor skew: 2)
	I0615 10:17:55.818767    3140 out.go:177] 
	W0615 10:17:55.822835    3140 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.3.
	I0615 10:17:55.825673    3140 out.go:177]   - Want kubectl v1.27.3? Try 'minikube kubectl -- get pods -A'
	I0615 10:17:55.833758    3140 out.go:177] * Done! kubectl is now configured to use "image-116000" cluster and "default" namespace by default
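With the context now set, a quick post-start sanity check from the host (using the host kubectl flagged in the skew warning above, or minikube kubectl --):

    kubectl get nodes -o wide          # image-116000 should report Ready once the bridge CNI settles
    kubectl get pods -n kube-system    # control-plane pods plus storage-provisioner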
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-06-15 17:17:35 UTC, ends at Thu 2023-06-15 17:17:58 UTC. --
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.926810216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.929235132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.929277007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.929290841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.929299424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:17:50 image-116000 cri-dockerd[1014]: time="2023-06-15T17:17:50Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/e11fa938eb23227f06e8f1e28f079926a1ec33737dba8d558c2b8cfc97d4cfce/resolv.conf as [nameserver 192.168.105.1]"
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.977144674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.977195799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.977211299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.977222632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.979937466Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.979972591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.979984757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:17:50 image-116000 dockerd[1125]: time="2023-06-15T17:17:50.979996799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:17:57 image-116000 dockerd[1117]: time="2023-06-15T17:17:57.577235511Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jun 15 17:17:57 image-116000 dockerd[1117]: time="2023-06-15T17:17:57.692874969Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jun 15 17:17:57 image-116000 dockerd[1117]: time="2023-06-15T17:17:57.709048011Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jun 15 17:17:57 image-116000 dockerd[1125]: time="2023-06-15T17:17:57.742534511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 17:17:57 image-116000 dockerd[1125]: time="2023-06-15T17:17:57.742564261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:17:57 image-116000 dockerd[1125]: time="2023-06-15T17:17:57.742584302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:17:57 image-116000 dockerd[1125]: time="2023-06-15T17:17:57.742590719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:17:57 image-116000 dockerd[1117]: time="2023-06-15T17:17:57.893263636Z" level=info msg="ignoring event" container=8295f062314bc4437ebb299a564f48dabe3481aff19b17db375a28569efb60e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 17:17:57 image-116000 dockerd[1125]: time="2023-06-15T17:17:57.893479052Z" level=info msg="shim disconnected" id=8295f062314bc4437ebb299a564f48dabe3481aff19b17db375a28569efb60e8 namespace=moby
	Jun 15 17:17:57 image-116000 dockerd[1125]: time="2023-06-15T17:17:57.893560136Z" level=warning msg="cleaning up after shim disconnected" id=8295f062314bc4437ebb299a564f48dabe3481aff19b17db375a28569efb60e8 namespace=moby
	Jun 15 17:17:57 image-116000 dockerd[1125]: time="2023-06-15T17:17:57.893578344Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	0bcd7bb9f59b1       ab3683b584ae5       8 seconds ago       Running             kube-controller-manager   0                   e11fa938eb232
	d62dad880ab7e       bcb9e554eaab6       8 seconds ago       Running             kube-scheduler            0                   e14f6bb213fbd
	87e3f0e986a50       39dfb036b0986       8 seconds ago       Running             kube-apiserver            0                   7252b1bbeda7f
	ddfa2f18debe1       24bc64e911039       8 seconds ago       Running             etcd                      0                   cb82bb3385d3c
	
	* 
	* ==> describe nodes <==
	* Name:               image-116000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-116000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627
	                    minikube.k8s.io/name=image-116000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_15T10_17_55_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Jun 2023 17:17:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-116000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Jun 2023 17:17:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Jun 2023 17:17:54 +0000   Thu, 15 Jun 2023 17:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Jun 2023 17:17:54 +0000   Thu, 15 Jun 2023 17:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Jun 2023 17:17:54 +0000   Thu, 15 Jun 2023 17:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 15 Jun 2023 17:17:54 +0000   Thu, 15 Jun 2023 17:17:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-116000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905012Ki
	  pods:               110
	System Info:
	  Machine ID:                 44877c64bf074545bb31130eebfa2545
	  System UUID:                44877c64bf074545bb31130eebfa2545
	  Boot ID:                    1e745ac7-fb1d-4397-a970-7add736df60e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.3
	  Kube-Proxy Version:         v1.27.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-116000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4s
	  kube-system                 kube-apiserver-image-116000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-image-116000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-image-116000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4s    kubelet  Node image-116000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s    kubelet  Node image-116000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s    kubelet  Node image-116000 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [Jun15 17:17] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.635098] EINJ: EINJ table not found.
	[  +0.515561] systemd-fstab-generator[116]: Ignoring "noauto" for root device
	[  +0.043847] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000841] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +6.194261] systemd-fstab-generator[494]: Ignoring "noauto" for root device
	[  +0.079783] systemd-fstab-generator[505]: Ignoring "noauto" for root device
	[  +0.441142] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.193515] systemd-fstab-generator[749]: Ignoring "noauto" for root device
	[  +0.077766] systemd-fstab-generator[760]: Ignoring "noauto" for root device
	[  +0.080503] systemd-fstab-generator[773]: Ignoring "noauto" for root device
	[  +1.237618] systemd-fstab-generator[932]: Ignoring "noauto" for root device
	[  +0.074750] systemd-fstab-generator[943]: Ignoring "noauto" for root device
	[  +0.074294] systemd-fstab-generator[954]: Ignoring "noauto" for root device
	[  +0.083144] systemd-fstab-generator[965]: Ignoring "noauto" for root device
	[  +0.086275] systemd-fstab-generator[1007]: Ignoring "noauto" for root device
	[  +2.508252] systemd-fstab-generator[1110]: Ignoring "noauto" for root device
	[  +1.461525] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.231093] systemd-fstab-generator[1440]: Ignoring "noauto" for root device
	[  +4.638184] systemd-fstab-generator[2284]: Ignoring "noauto" for root device
	[  +3.231015] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [ddfa2f18debe] <==
	* {"level":"info","ts":"2023-06-15T17:17:51.170Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-06-15T17:17:51.170Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-06-15T17:17:51.171Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-15T17:17:51.171Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-06-15T17:17:51.171Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-06-15T17:17:51.171Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-15T17:17:51.171Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-15T17:17:51.450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-15T17:17:51.450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-15T17:17:51.450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-06-15T17:17:51.450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-06-15T17:17:51.450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-06-15T17:17:51.450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-06-15T17:17:51.450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-06-15T17:17:51.458Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T17:17:51.462Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-116000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-15T17:17:51.466Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T17:17:51.467Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-15T17:17:51.467Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-15T17:17:51.467Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-06-15T17:17:51.467Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-15T17:17:51.467Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-15T17:17:51.468Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T17:17:51.468Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-15T17:17:51.468Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  17:17:58 up 0 min,  0 users,  load average: 0.21, 0.05, 0.02
	Linux image-116000 5.10.57 #1 SMP PREEMPT Wed Jun 14 05:08:37 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [87e3f0e986a5] <==
	* I0615 17:17:52.304802       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0615 17:17:52.305017       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0615 17:17:52.305415       1 controller.go:624] quota admission added evaluator for: namespaces
	I0615 17:17:52.305428       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0615 17:17:52.308646       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0615 17:17:52.327622       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0615 17:17:52.343148       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0615 17:17:52.343249       1 aggregator.go:152] initial CRD sync complete...
	I0615 17:17:52.343261       1 autoregister_controller.go:141] Starting autoregister controller
	I0615 17:17:52.343267       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0615 17:17:52.343284       1 cache.go:39] Caches are synced for autoregister controller
	I0615 17:17:53.058129       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0615 17:17:53.209018       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0615 17:17:53.211506       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0615 17:17:53.211584       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0615 17:17:53.349186       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0615 17:17:53.362877       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0615 17:17:53.484682       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0615 17:17:53.487722       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0615 17:17:53.488124       1 controller.go:624] quota admission added evaluator for: endpoints
	I0615 17:17:53.489680       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0615 17:17:54.242010       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0615 17:17:54.634555       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0615 17:17:54.639658       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0615 17:17:54.645727       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [0bcd7bb9f59b] <==
	* I0615 17:17:55.189244       1 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
	I0615 17:17:55.189292       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0615 17:17:55.339679       1 controllermanager.go:638] "Started controller" controller="pvc-protection"
	I0615 17:17:55.339701       1 pvc_protection_controller.go:102] "Starting PVC protection controller"
	I0615 17:17:55.339705       1 shared_informer.go:311] Waiting for caches to sync for PVC protection
	I0615 17:17:55.489070       1 controllermanager.go:638] "Started controller" controller="ephemeral-volume"
	I0615 17:17:55.489110       1 controller.go:169] "Starting ephemeral volume controller"
	I0615 17:17:55.489115       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I0615 17:17:55.638975       1 controllermanager.go:638] "Started controller" controller="statefulset"
	I0615 17:17:55.639003       1 core.go:228] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes."
	I0615 17:17:55.639007       1 controllermanager.go:616] "Warning: skipping controller" controller="route"
	I0615 17:17:55.639055       1 stateful_set.go:161] "Starting stateful set controller"
	I0615 17:17:55.639059       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	E0615 17:17:55.788666       1 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
	I0615 17:17:55.788680       1 controllermanager.go:616] "Warning: skipping controller" controller="cloud-node-lifecycle"
	I0615 17:17:55.939326       1 controllermanager.go:638] "Started controller" controller="persistentvolume-expander"
	I0615 17:17:55.939409       1 expand_controller.go:339] "Starting expand controller"
	I0615 17:17:55.939415       1 shared_informer.go:311] Waiting for caches to sync for expand
	I0615 17:17:56.087981       1 controllermanager.go:638] "Started controller" controller="root-ca-cert-publisher"
	I0615 17:17:56.088004       1 publisher.go:101] Starting root CA certificate configmap publisher
	I0615 17:17:56.088008       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0615 17:17:56.238921       1 controllermanager.go:638] "Started controller" controller="tokencleaner"
	I0615 17:17:56.238953       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0615 17:17:56.238957       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0615 17:17:56.238960       1 shared_informer.go:318] Caches are synced for token_cleaner
	
	* 
	* ==> kube-scheduler [d62dad880ab7] <==
	* W0615 17:17:52.290914       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0615 17:17:52.290967       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0615 17:17:52.290927       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0615 17:17:52.290971       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0615 17:17:52.290987       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 17:17:52.290990       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 17:17:52.291003       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0615 17:17:52.291007       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0615 17:17:52.291019       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0615 17:17:52.291023       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0615 17:17:52.291035       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 17:17:52.291038       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0615 17:17:52.291058       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0615 17:17:52.291066       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0615 17:17:52.290869       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0615 17:17:52.291071       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0615 17:17:52.291114       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0615 17:17:52.291127       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0615 17:17:53.169468       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 17:17:53.169490       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0615 17:17:53.178693       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0615 17:17:53.178713       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0615 17:17:53.221809       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0615 17:17:53.221826       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0615 17:17:53.881223       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-06-15 17:17:35 UTC, ends at Thu 2023-06-15 17:17:58 UTC. --
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.788658    2297 kubelet_node_status.go:108] "Node was previously registered" node="image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.788747    2297 kubelet_node_status.go:73] "Successfully registered node" node="image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.808453    2297 topology_manager.go:212] "Topology Admit Handler"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.808531    2297 topology_manager.go:212] "Topology Admit Handler"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.808548    2297 topology_manager.go:212] "Topology Admit Handler"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.808584    2297 topology_manager.go:212] "Topology Admit Handler"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.883201    2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c8811dc33eeeb479f31b75f23d48e5b-k8s-certs\") pod \"kube-controller-manager-image-116000\" (UID: \"1c8811dc33eeeb479f31b75f23d48e5b\") " pod="kube-system/kube-controller-manager-image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.883224    2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c8811dc33eeeb479f31b75f23d48e5b-kubeconfig\") pod \"kube-controller-manager-image-116000\" (UID: \"1c8811dc33eeeb479f31b75f23d48e5b\") " pod="kube-system/kube-controller-manager-image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.883234    2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/61e3f36ec6f2b05b144a80d079e5b88d-etcd-certs\") pod \"etcd-image-116000\" (UID: \"61e3f36ec6f2b05b144a80d079e5b88d\") " pod="kube-system/etcd-image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.883244    2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/61e3f36ec6f2b05b144a80d079e5b88d-etcd-data\") pod \"etcd-image-116000\" (UID: \"61e3f36ec6f2b05b144a80d079e5b88d\") " pod="kube-system/etcd-image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.883254    2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b43cffb7ef922e51a9b5116641de73ec-usr-share-ca-certificates\") pod \"kube-apiserver-image-116000\" (UID: \"b43cffb7ef922e51a9b5116641de73ec\") " pod="kube-system/kube-apiserver-image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.883262    2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c8811dc33eeeb479f31b75f23d48e5b-flexvolume-dir\") pod \"kube-controller-manager-image-116000\" (UID: \"1c8811dc33eeeb479f31b75f23d48e5b\") " pod="kube-system/kube-controller-manager-image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.883272    2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c8811dc33eeeb479f31b75f23d48e5b-usr-share-ca-certificates\") pod \"kube-controller-manager-image-116000\" (UID: \"1c8811dc33eeeb479f31b75f23d48e5b\") " pod="kube-system/kube-controller-manager-image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.883281    2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b91cc82349f6cc819f92c1eeabe35c7-kubeconfig\") pod \"kube-scheduler-image-116000\" (UID: \"4b91cc82349f6cc819f92c1eeabe35c7\") " pod="kube-system/kube-scheduler-image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.883290    2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b43cffb7ef922e51a9b5116641de73ec-ca-certs\") pod \"kube-apiserver-image-116000\" (UID: \"b43cffb7ef922e51a9b5116641de73ec\") " pod="kube-system/kube-apiserver-image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.883300    2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b43cffb7ef922e51a9b5116641de73ec-k8s-certs\") pod \"kube-apiserver-image-116000\" (UID: \"b43cffb7ef922e51a9b5116641de73ec\") " pod="kube-system/kube-apiserver-image-116000"
	Jun 15 17:17:54 image-116000 kubelet[2297]: I0615 17:17:54.883309    2297 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c8811dc33eeeb479f31b75f23d48e5b-ca-certs\") pod \"kube-controller-manager-image-116000\" (UID: \"1c8811dc33eeeb479f31b75f23d48e5b\") " pod="kube-system/kube-controller-manager-image-116000"
	Jun 15 17:17:55 image-116000 kubelet[2297]: I0615 17:17:55.666226    2297 apiserver.go:52] "Watching apiserver"
	Jun 15 17:17:55 image-116000 kubelet[2297]: I0615 17:17:55.680169    2297 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jun 15 17:17:55 image-116000 kubelet[2297]: I0615 17:17:55.686979    2297 reconciler.go:41] "Reconciler: start to sync state"
	Jun 15 17:17:55 image-116000 kubelet[2297]: E0615 17:17:55.733358    2297 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-116000\" already exists" pod="kube-system/kube-apiserver-image-116000"
	Jun 15 17:17:55 image-116000 kubelet[2297]: I0615 17:17:55.742828    2297 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-116000" podStartSLOduration=1.7428040930000002 podCreationTimestamp="2023-06-15 17:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-15 17:17:55.742661051 +0000 UTC m=+1.119217377" watchObservedRunningTime="2023-06-15 17:17:55.742804093 +0000 UTC m=+1.119360419"
	Jun 15 17:17:55 image-116000 kubelet[2297]: I0615 17:17:55.750294    2297 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-116000" podStartSLOduration=1.750273926 podCreationTimestamp="2023-06-15 17:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-15 17:17:55.746809093 +0000 UTC m=+1.123365377" watchObservedRunningTime="2023-06-15 17:17:55.750273926 +0000 UTC m=+1.126830252"
	Jun 15 17:17:55 image-116000 kubelet[2297]: I0615 17:17:55.753495    2297 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-116000" podStartSLOduration=1.7534812180000001 podCreationTimestamp="2023-06-15 17:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-15 17:17:55.750364218 +0000 UTC m=+1.126920544" watchObservedRunningTime="2023-06-15 17:17:55.753481218 +0000 UTC m=+1.130037502"
	Jun 15 17:17:55 image-116000 kubelet[2297]: I0615 17:17:55.757407    2297 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-116000" podStartSLOduration=1.757392593 podCreationTimestamp="2023-06-15 17:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-15 17:17:55.753591968 +0000 UTC m=+1.130148294" watchObservedRunningTime="2023-06-15 17:17:55.757392593 +0000 UTC m=+1.133948919"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-116000 -n image-116000
helpers_test.go:261: (dbg) Run:  kubectl --context image-116000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-116000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-116000 describe pod storage-provisioner: exit status 1 (39.760375ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context image-116000 describe pod storage-provisioner: exit status 1
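Note: the NotFound above is most likely a namespace mismatch rather than a missing pod. minikube deploys storage-provisioner into kube-system, while the bare describe at helpers_test.go:277 defaults to the default namespace. A namespaced retry (illustrative sketch, assuming the image-116000 profile were still running) would confirm:

	kubectl --context image-116000 -n kube-system describe pod storage-provisioner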
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.10s)
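For local triage, the build this test exercises can be replayed by hand. The commands below are taken from the Audit log captured in the next post-mortem; note the same Audit log shows the image-116000 profile was deleted right after the run, so the cluster would first need to be recreated with the recorded "start -p image-116000 --driver=qemu2":

	out/minikube-darwin-arm64 -p image-116000 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg
	out/minikube-darwin-arm64 -p image-116000 image ls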

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (52.19s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-422000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-422000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (9.118106959s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-422000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-422000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4fadecaa-7f65-44a6-a1af-12db5565d646] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4fadecaa-7f65-44a6-a1af-12db5565d646] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.011679292s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-422000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-422000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-422000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
E0615 10:20:00.481031    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.038084292s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                
stderr: 
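The full 15 s timeout suggests nothing was answering DNS on 192.168.105.6 at all, rather than a wrong record being served. A quicker manual probe (sketch only; the dig flags and the kube-system listing are illustrative diagnostics, not part of the test):

	dig +time=2 +tries=1 @192.168.105.6 hello-john.test
	kubectl --context ingress-addon-legacy-422000 -n kube-system get pods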
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-422000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-422000 addons disable ingress-dns --alsologtostderr -v=1: (9.681483334s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-422000 addons disable ingress --alsologtostderr -v=1
E0615 10:20:28.188720    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-422000 addons disable ingress --alsologtostderr -v=1: (7.049004584s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-422000 -n ingress-addon-legacy-422000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-422000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                           Args                           |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-822000 image ls                               | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| image   | functional-822000 image load                             | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         | /Users/jenkins/workspace/addon-resizer-save.tar          |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-822000 image ls                               | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| image   | functional-822000 image save --daemon                    | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         | gcr.io/google-containers/addon-resizer:functional-822000 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-822000                                        | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         | image ls --format short                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-822000                                        | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         | image ls --format yaml                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| ssh     | functional-822000 ssh pgrep                              | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT |                     |
	|         | buildkitd                                                |                             |         |         |                     |                     |
	| image   | functional-822000                                        | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         | image ls --format json                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-822000 image build -t                         | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         | localhost/my-image:functional-822000                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                         |                             |         |         |                     |                     |
	| image   | functional-822000                                        | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         | image ls --format table                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-822000 image ls                               | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| delete  | -p functional-822000                                     | functional-822000           | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| start   | -p image-116000 --driver=qemu2                           | image-116000                | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         |                                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-116000                | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | -p image-116000                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-116000                | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str                 |                             |         |         |                     |                     |
	|         | --build-opt=no-cache                                     |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                       |                             |         |         |                     |                     |
	|         | image-116000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-116000                | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | --build-opt=no-cache -p                                  |                             |         |         |                     |                     |
	|         | image-116000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-116000                | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	|         | -f inner/Dockerfile                                      |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-f                            |                             |         |         |                     |                     |
	|         | -p image-116000                                          |                             |         |         |                     |                     |
	| delete  | -p image-116000                                          | image-116000                | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:17 PDT |
	| start   | -p ingress-addon-legacy-422000                           | ingress-addon-legacy-422000 | jenkins | v1.30.1 | 15 Jun 23 10:17 PDT | 15 Jun 23 10:19 PDT |
	|         | --kubernetes-version=v1.18.20                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	|         | --driver=qemu2                                           |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-422000                              | ingress-addon-legacy-422000 | jenkins | v1.30.1 | 15 Jun 23 10:19 PDT | 15 Jun 23 10:19 PDT |
	|         | addons enable ingress                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-422000                              | ingress-addon-legacy-422000 | jenkins | v1.30.1 | 15 Jun 23 10:19 PDT | 15 Jun 23 10:19 PDT |
	|         | addons enable ingress-dns                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-422000                              | ingress-addon-legacy-422000 | jenkins | v1.30.1 | 15 Jun 23 10:19 PDT | 15 Jun 23 10:19 PDT |
	|         | ssh curl -s http://127.0.0.1/                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-422000 ip                           | ingress-addon-legacy-422000 | jenkins | v1.30.1 | 15 Jun 23 10:19 PDT | 15 Jun 23 10:19 PDT |
	| addons  | ingress-addon-legacy-422000                              | ingress-addon-legacy-422000 | jenkins | v1.30.1 | 15 Jun 23 10:20 PDT | 15 Jun 23 10:20 PDT |
	|         | addons disable ingress-dns                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-422000                              | ingress-addon-legacy-422000 | jenkins | v1.30.1 | 15 Jun 23 10:20 PDT | 15 Jun 23 10:20 PDT |
	|         | addons disable ingress                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 10:17:59
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 10:17:59.325420    3179 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:17:59.325555    3179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:17:59.325557    3179 out.go:309] Setting ErrFile to fd 2...
	I0615 10:17:59.325560    3179 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:17:59.325632    3179 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:17:59.326686    3179 out.go:303] Setting JSON to false
	I0615 10:17:59.342099    3179 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2850,"bootTime":1686846629,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:17:59.342151    3179 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:17:59.346147    3179 out.go:177] * [ingress-addon-legacy-422000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:17:59.353163    3179 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:17:59.353175    3179 notify.go:220] Checking for updates...
	I0615 10:17:59.357183    3179 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:17:59.360176    3179 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:17:59.363196    3179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:17:59.366186    3179 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:17:59.369213    3179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:17:59.372327    3179 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:17:59.376174    3179 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:17:59.383090    3179 start.go:297] selected driver: qemu2
	I0615 10:17:59.383096    3179 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:17:59.383103    3179 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:17:59.385054    3179 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:17:59.388119    3179 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:17:59.391258    3179 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:17:59.391289    3179 cni.go:84] Creating CNI manager for ""
	I0615 10:17:59.391296    3179 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0615 10:17:59.391307    3179 start_flags.go:319] config:
	{Name:ingress-addon-legacy-422000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-422000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:17:59.391402    3179 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:17:59.395138    3179 out.go:177] * Starting control plane node ingress-addon-legacy-422000 in cluster ingress-addon-legacy-422000
	I0615 10:17:59.403011    3179 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0615 10:17:59.606645    3179 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0615 10:17:59.606711    3179 cache.go:57] Caching tarball of preloaded images
	I0615 10:17:59.607411    3179 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0615 10:17:59.616387    3179 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0615 10:17:59.620369    3179 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0615 10:17:59.838456    3179 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0615 10:18:16.914285    3179 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0615 10:18:16.914431    3179 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0615 10:18:17.660628    3179 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0615 10:18:17.660812    3179 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/config.json ...
	I0615 10:18:17.660838    3179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/config.json: {Name:mk32fd03a559a4359bc9eab10ed845d185b6724d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:18:17.661091    3179 start.go:365] acquiring machines lock for ingress-addon-legacy-422000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:18:17.661120    3179 start.go:369] acquired machines lock for "ingress-addon-legacy-422000" in 20.292µs
	I0615 10:18:17.661128    3179 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-422000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-422000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:18:17.661167    3179 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:18:17.666214    3179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0615 10:18:17.680719    3179 start.go:159] libmachine.API.Create for "ingress-addon-legacy-422000" (driver="qemu2")
	I0615 10:18:17.680740    3179 client.go:168] LocalClient.Create starting
	I0615 10:18:17.680838    3179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:18:17.680859    3179 main.go:141] libmachine: Decoding PEM data...
	I0615 10:18:17.680869    3179 main.go:141] libmachine: Parsing certificate...
	I0615 10:18:17.680911    3179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:18:17.680925    3179 main.go:141] libmachine: Decoding PEM data...
	I0615 10:18:17.680936    3179 main.go:141] libmachine: Parsing certificate...
	I0615 10:18:17.681267    3179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:18:18.079932    3179 main.go:141] libmachine: Creating SSH key...
	I0615 10:18:18.229145    3179 main.go:141] libmachine: Creating Disk image...
	I0615 10:18:18.229155    3179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:18:18.229314    3179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/disk.qcow2
	I0615 10:18:18.238307    3179 main.go:141] libmachine: STDOUT: 
	I0615 10:18:18.238320    3179 main.go:141] libmachine: STDERR: 
	I0615 10:18:18.238373    3179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/disk.qcow2 +20000M
	I0615 10:18:18.245514    3179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:18:18.245527    3179 main.go:141] libmachine: STDERR: 
	I0615 10:18:18.245539    3179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/disk.qcow2
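Disk creation is two qemu-img invocations: convert the raw boot image to qcow2, then grow it by the requested size. A sketch of the same pair of commands from Go, assuming qemu-img is on PATH (paths shortened for illustration):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk converts a raw image to qcow2 and grows it by extraMB megabytes.
	func createDisk(raw, qcow2 string, extraMB int) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
			fmt.Println(err)
		}
	}
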
	I0615 10:18:18.245546    3179 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:18:18.245587    3179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:69:e1:f1:f0:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/disk.qcow2
	I0615 10:18:18.279424    3179 main.go:141] libmachine: STDOUT: 
	I0615 10:18:18.279454    3179 main.go:141] libmachine: STDERR: 
	I0615 10:18:18.279458    3179 main.go:141] libmachine: Attempt 0
	I0615 10:18:18.279479    3179 main.go:141] libmachine: Searching for ee:69:e1:f1:f0:ad in /var/db/dhcpd_leases ...
	I0615 10:18:18.279541    3179 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0615 10:18:18.279560    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:28:2d:d8:f3:b7 ID:1,4a:28:2d:d8:f3:b7 Lease:0x648c992f}
	I0615 10:18:18.279568    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:18:18.279573    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:18:18.279584    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:18:20.281807    3179 main.go:141] libmachine: Attempt 1
	I0615 10:18:20.282017    3179 main.go:141] libmachine: Searching for ee:69:e1:f1:f0:ad in /var/db/dhcpd_leases ...
	I0615 10:18:20.282357    3179 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0615 10:18:20.282406    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:28:2d:d8:f3:b7 ID:1,4a:28:2d:d8:f3:b7 Lease:0x648c992f}
	I0615 10:18:20.282477    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:18:20.282507    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:18:20.282534    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:18:22.283763    3179 main.go:141] libmachine: Attempt 2
	I0615 10:18:22.283800    3179 main.go:141] libmachine: Searching for ee:69:e1:f1:f0:ad in /var/db/dhcpd_leases ...
	I0615 10:18:22.283958    3179 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0615 10:18:22.283970    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:28:2d:d8:f3:b7 ID:1,4a:28:2d:d8:f3:b7 Lease:0x648c992f}
	I0615 10:18:22.283976    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:18:22.283981    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:18:22.283986    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:18:24.285901    3179 main.go:141] libmachine: Attempt 3
	I0615 10:18:24.285926    3179 main.go:141] libmachine: Searching for ee:69:e1:f1:f0:ad in /var/db/dhcpd_leases ...
	I0615 10:18:24.286093    3179 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0615 10:18:24.286104    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:28:2d:d8:f3:b7 ID:1,4a:28:2d:d8:f3:b7 Lease:0x648c992f}
	I0615 10:18:24.286110    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:18:24.286115    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:18:24.286120    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:18:26.288039    3179 main.go:141] libmachine: Attempt 4
	I0615 10:18:26.288058    3179 main.go:141] libmachine: Searching for ee:69:e1:f1:f0:ad in /var/db/dhcpd_leases ...
	I0615 10:18:26.288101    3179 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0615 10:18:26.288109    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:28:2d:d8:f3:b7 ID:1,4a:28:2d:d8:f3:b7 Lease:0x648c992f}
	I0615 10:18:26.288115    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:18:26.288122    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:18:26.288127    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:18:28.289041    3179 main.go:141] libmachine: Attempt 5
	I0615 10:18:28.289062    3179 main.go:141] libmachine: Searching for ee:69:e1:f1:f0:ad in /var/db/dhcpd_leases ...
	I0615 10:18:28.289138    3179 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0615 10:18:28.289149    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:4a:28:2d:d8:f3:b7 ID:1,4a:28:2d:d8:f3:b7 Lease:0x648c992f}
	I0615 10:18:28.289155    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:ee:b2:d2:4f:3:e3 ID:1,ee:b2:d2:4f:3:e3 Lease:0x648c9867}
	I0615 10:18:28.289160    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ae:3:40:f2:c1:7 ID:1,ae:3:40:f2:c1:7 Lease:0x648b46da}
	I0615 10:18:28.289166    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:1a:25:cc:f:2e:6f ID:1,1a:25:cc:f:2e:6f Lease:0x648c8ed9}
	I0615 10:18:30.291143    3179 main.go:141] libmachine: Attempt 6
	I0615 10:18:30.291188    3179 main.go:141] libmachine: Searching for ee:69:e1:f1:f0:ad in /var/db/dhcpd_leases ...
	I0615 10:18:30.291286    3179 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0615 10:18:30.291299    3179 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:ee:69:e1:f1:f0:ad ID:1,ee:69:e1:f1:f0:ad Lease:0x648c9965}
	I0615 10:18:30.291304    3179 main.go:141] libmachine: Found match: ee:69:e1:f1:f0:ad
	I0615 10:18:30.291314    3179 main.go:141] libmachine: IP: 192.168.105.6
	I0615 10:18:30.291320    3179 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
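The retry loop above polls /var/db/dhcpd_leases every two seconds until the VM's MAC address shows up, then takes that entry's IP. A simplified Go parser for the same lookup; it assumes the macOS lease format where an ip_address= line precedes the matching hw_address= line, which is looser than minikube's real parser:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findIPByMAC scans the lease file and returns the IP of the entry whose
	// hw_address ends with the given MAC.
	func findIPByMAC(leaseFile, mac string) (string, error) {
		f, err := os.Open(leaseFile)
		if err != nil {
			return "", err
		}
		defer f.Close()
		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
				return ip, nil
			}
		}
		return "", fmt.Errorf("%s not found", mac)
	}

	func main() {
		ip, err := findIPByMAC("/var/db/dhcpd_leases", "ee:69:e1:f1:f0:ad")
		fmt.Println(ip, err)
	}
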
	I0615 10:18:31.298056    3179 machine.go:88] provisioning docker machine ...
	I0615 10:18:31.298078    3179 buildroot.go:166] provisioning hostname "ingress-addon-legacy-422000"
	I0615 10:18:31.298140    3179 main.go:141] libmachine: Using SSH client type: native
	I0615 10:18:31.298406    3179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b4e20] 0x1028b7880 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0615 10:18:31.298415    3179 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-422000 && echo "ingress-addon-legacy-422000" | sudo tee /etc/hostname
	I0615 10:18:31.358571    3179 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-422000
	
	I0615 10:18:31.358630    3179 main.go:141] libmachine: Using SSH client type: native
	I0615 10:18:31.358878    3179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b4e20] 0x1028b7880 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0615 10:18:31.358887    3179 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-422000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-422000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-422000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0615 10:18:31.416378    3179 main.go:141] libmachine: SSH cmd err, output: <nil>: 
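Both hostname commands run over SSH as the docker user with the machine's generated key. A minimal sketch of such a remote runner using golang.org/x/crypto/ssh (minikube's actual runner has more plumbing; skipping host-key verification is tolerable here only because the VM is disposable):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runSSH executes one shell command on the guest and returns its combined output.
	func runSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
		})
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runSSH("192.168.105.6:22", "docker",
			"/Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/id_rsa",
			"sudo hostname ingress-addon-legacy-422000")
		fmt.Println(out, err)
	}
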
	I0615 10:18:31.416390    3179 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/16718-868/.minikube CaCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/16718-868/.minikube}
	I0615 10:18:31.416397    3179 buildroot.go:174] setting up certificates
	I0615 10:18:31.416403    3179 provision.go:83] configureAuth start
	I0615 10:18:31.416407    3179 provision.go:138] copyHostCerts
	I0615 10:18:31.416435    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem
	I0615 10:18:31.416491    3179 exec_runner.go:144] found /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem, removing ...
	I0615 10:18:31.416496    3179 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem
	I0615 10:18:31.416624    3179 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/ca.pem (1078 bytes)
	I0615 10:18:31.416793    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem
	I0615 10:18:31.416816    3179 exec_runner.go:144] found /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem, removing ...
	I0615 10:18:31.416819    3179 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem
	I0615 10:18:31.416891    3179 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/cert.pem (1123 bytes)
	I0615 10:18:31.416967    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem
	I0615 10:18:31.416994    3179 exec_runner.go:144] found /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem, removing ...
	I0615 10:18:31.416997    3179 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem
	I0615 10:18:31.417038    3179 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/16718-868/.minikube/key.pem (1679 bytes)
	I0615 10:18:31.417123    3179 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-422000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-422000]
	I0615 10:18:31.547433    3179 provision.go:172] copyRemoteCerts
	I0615 10:18:31.547489    3179 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0615 10:18:31.547500    3179 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/id_rsa Username:docker}
	I0615 10:18:31.581338    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0615 10:18:31.581382    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0615 10:18:31.588105    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0615 10:18:31.588153    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0615 10:18:31.594834    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0615 10:18:31.594876    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0615 10:18:31.602279    3179 provision.go:86] duration metric: configureAuth took 185.880125ms
	I0615 10:18:31.602288    3179 buildroot.go:189] setting minikube options for container-runtime
	I0615 10:18:31.602398    3179 config.go:182] Loaded profile config "ingress-addon-legacy-422000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0615 10:18:31.602443    3179 main.go:141] libmachine: Using SSH client type: native
	I0615 10:18:31.602661    3179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b4e20] 0x1028b7880 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0615 10:18:31.602669    3179 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0615 10:18:31.656954    3179 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0615 10:18:31.656963    3179 buildroot.go:70] root file system type: tmpfs
	I0615 10:18:31.657018    3179 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0615 10:18:31.657070    3179 main.go:141] libmachine: Using SSH client type: native
	I0615 10:18:31.657294    3179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b4e20] 0x1028b7880 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0615 10:18:31.657327    3179 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0615 10:18:31.719813    3179 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0615 10:18:31.719856    3179 main.go:141] libmachine: Using SSH client type: native
	I0615 10:18:31.720117    3179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b4e20] 0x1028b7880 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0615 10:18:31.720127    3179 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0615 10:18:32.090924    3179 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0615 10:18:32.090937    3179 machine.go:91] provisioned docker machine in 792.909833ms
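The docker.service update above is deliberately idempotent: the new unit is written to docker.service.new, and only if diff reports a difference is it moved into place, followed by daemon-reload, enable, and restart. A sketch of that pattern, assuming a hypothetical runCmd helper that executes a shell command on the guest:

	package main

	import "fmt"

	// updateUnit swaps in a new systemd unit only when its content differs from
	// the installed one, then reloads systemd and restarts docker.
	func updateUnit(runCmd func(string) (string, error), unit string) error {
		cmd := fmt.Sprintf(
			"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
				"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
			unit)
		out, err := runCmd(cmd)
		fmt.Print(out)
		return err
	}

	func main() {
		dryRun := func(cmd string) (string, error) { return "would run: " + cmd + "\n", nil }
		_ = updateUnit(dryRun, "/lib/systemd/system/docker.service")
	}
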
	I0615 10:18:32.090943    3179 client.go:171] LocalClient.Create took 14.411178875s
	I0615 10:18:32.090958    3179 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-422000" took 14.411219375s
	I0615 10:18:32.090963    3179 start.go:300] post-start starting for "ingress-addon-legacy-422000" (driver="qemu2")
	I0615 10:18:32.090970    3179 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0615 10:18:32.091047    3179 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0615 10:18:32.091056    3179 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/id_rsa Username:docker}
	I0615 10:18:32.121217    3179 ssh_runner.go:195] Run: cat /etc/os-release
	I0615 10:18:32.122520    3179 info.go:137] Remote host: Buildroot 2021.02.12
	I0615 10:18:32.122525    3179 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/addons for local assets ...
	I0615 10:18:32.122588    3179 filesync.go:126] Scanning /Users/jenkins/minikube-integration/16718-868/.minikube/files for local assets ...
	I0615 10:18:32.122697    3179 filesync.go:149] local asset: /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem -> 13132.pem in /etc/ssl/certs
	I0615 10:18:32.122701    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem -> /etc/ssl/certs/13132.pem
	I0615 10:18:32.122850    3179 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0615 10:18:32.125488    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem --> /etc/ssl/certs/13132.pem (1708 bytes)
	I0615 10:18:32.132252    3179 start.go:303] post-start completed in 41.283542ms
	I0615 10:18:32.132637    3179 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/config.json ...
	I0615 10:18:32.132805    3179 start.go:128] duration metric: createHost completed in 14.472618333s
	I0615 10:18:32.132833    3179 main.go:141] libmachine: Using SSH client type: native
	I0615 10:18:32.133049    3179 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028b4e20] 0x1028b7880 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0615 10:18:32.133053    3179 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0615 10:18:32.186674    3179 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686849512.565747001
	
	I0615 10:18:32.186685    3179 fix.go:206] guest clock: 1686849512.565747001
	I0615 10:18:32.186688    3179 fix.go:219] Guest: 2023-06-15 10:18:32.565747001 -0700 PDT Remote: 2023-06-15 10:18:32.132808 -0700 PDT m=+32.830180001 (delta=432.939001ms)
	I0615 10:18:32.186700    3179 fix.go:190] guest clock delta is within tolerance: 432.939001ms
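The clock check runs date +%s.%N on the guest and compares the result to the host's clock; here the ~433ms delta is within tolerance, so no resync is needed. A small Go sketch of the comparison (the one-second tolerance below is an assumption, not minikube's exact threshold):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses `date +%s.%N` output from the guest and returns how far
	// the guest clock is ahead of (positive) or behind (negative) the host clock.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		d, _ := clockDelta("1686849512.565747001", time.Now())
		if d < 0 {
			d = -d
		}
		fmt.Println("delta:", d, "within tolerance:", d < time.Second) // assumed 1s threshold
	}
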
	I0615 10:18:32.186703    3179 start.go:83] releasing machines lock for "ingress-addon-legacy-422000", held for 14.526564416s
	I0615 10:18:32.186985    3179 ssh_runner.go:195] Run: cat /version.json
	I0615 10:18:32.186994    3179 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/id_rsa Username:docker}
	I0615 10:18:32.187001    3179 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0615 10:18:32.187016    3179 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/id_rsa Username:docker}
	I0615 10:18:32.258750    3179 ssh_runner.go:195] Run: systemctl --version
	I0615 10:18:32.260874    3179 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0615 10:18:32.262608    3179 cni.go:215] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0615 10:18:32.262634    3179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0615 10:18:32.266098    3179 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0615 10:18:32.271172    3179 cni.go:314] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0615 10:18:32.271179    3179 start.go:466] detecting cgroup driver to use...
	I0615 10:18:32.271239    3179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 10:18:32.278338    3179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0615 10:18:32.282070    3179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0615 10:18:32.285474    3179 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0615 10:18:32.285518    3179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0615 10:18:32.288272    3179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 10:18:32.291052    3179 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0615 10:18:32.294273    3179 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0615 10:18:32.297186    3179 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0615 10:18:32.300177    3179 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0615 10:18:32.302990    3179 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0615 10:18:32.306199    3179 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0615 10:18:32.309136    3179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:18:32.389347    3179 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0615 10:18:32.396750    3179 start.go:466] detecting cgroup driver to use...
	I0615 10:18:32.396828    3179 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0615 10:18:32.401953    3179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 10:18:32.406836    3179 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0615 10:18:32.413023    3179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0615 10:18:32.417659    3179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 10:18:32.422347    3179 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0615 10:18:32.464796    3179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0615 10:18:32.469676    3179 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0615 10:18:32.475132    3179 ssh_runner.go:195] Run: which cri-dockerd
	I0615 10:18:32.476460    3179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0615 10:18:32.478919    3179 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0615 10:18:32.483922    3179 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0615 10:18:32.559799    3179 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0615 10:18:32.645813    3179 docker.go:535] configuring docker to use "cgroupfs" as cgroup driver...
	I0615 10:18:32.645826    3179 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0615 10:18:32.651581    3179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:18:32.727296    3179 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 10:18:33.923165    3179 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.195908625s)
	I0615 10:18:33.923242    3179 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 10:18:33.935967    3179 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0615 10:18:33.952604    3179 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.2 ...
	I0615 10:18:33.952706    3179 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0615 10:18:33.954277    3179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0615 10:18:33.958103    3179 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0615 10:18:33.958145    3179 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 10:18:33.963428    3179 docker.go:636] Got preloaded images: 
	I0615 10:18:33.963434    3179 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0615 10:18:33.963471    3179 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 10:18:33.966666    3179 ssh_runner.go:195] Run: which lz4
	I0615 10:18:33.968038    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0615 10:18:33.968142    3179 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0615 10:18:33.969540    3179 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0615 10:18:33.969556    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0615 10:18:35.692624    3179 docker.go:600] Took 1.724601 seconds to copy over tarball
	I0615 10:18:35.692693    3179 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0615 10:18:36.994772    3179 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.302115416s)
	I0615 10:18:36.994785    3179 ssh_runner.go:146] rm: /preloaded.tar.lz4
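The preload path is: scp the ~460MB tarball to /preloaded.tar.lz4 on the guest, unpack it into /var with tar -I lz4 so the Docker image store is prepopulated, then delete the tarball. A sketch of the extraction step from Go, run locally for illustration (paths are placeholders):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// extractPreload unpacks an lz4-compressed tarball into dest, timing the step.
	func extractPreload(tarball, dest string) error {
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", dest, "-xf", tarball)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("extract: %v: %s", err, out)
		}
		fmt.Printf("extracted in %s\n", time.Since(start))
		return nil
	}

	func main() {
		if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
			fmt.Println(err)
		}
	}
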
	I0615 10:18:37.020048    3179 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0615 10:18:37.025973    3179 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0615 10:18:37.032393    3179 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0615 10:18:37.117709    3179 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0615 10:18:38.649993    3179 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.532325667s)
	I0615 10:18:38.650101    3179 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0615 10:18:38.655913    3179 docker.go:636] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0615 10:18:38.655923    3179 docker.go:642] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0615 10:18:38.655927    3179 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0615 10:18:38.670378    3179 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0615 10:18:38.670425    3179 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0615 10:18:38.670481    3179 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0615 10:18:38.670776    3179 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0615 10:18:38.674268    3179 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0615 10:18:38.674319    3179 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0615 10:18:38.674362    3179 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0615 10:18:38.674719    3179 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0615 10:18:38.681279    3179 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0615 10:18:38.681286    3179 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0615 10:18:38.682417    3179 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0615 10:18:38.682425    3179 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0615 10:18:38.682538    3179 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0615 10:18:38.682648    3179 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0615 10:18:38.683025    3179 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0615 10:18:38.683388    3179 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	W0615 10:18:40.045484    3179 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0615 10:18:40.045619    3179 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	W0615 10:18:40.051327    3179 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0615 10:18:40.051439    3179 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0615 10:18:40.051745    3179 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0615 10:18:40.051765    3179 docker.go:316] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0615 10:18:40.051789    3179 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0615 10:18:40.064211    3179 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0615 10:18:40.064232    3179 docker.go:316] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0615 10:18:40.064235    3179 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0615 10:18:40.064278    3179 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0615 10:18:40.070570    3179 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0615 10:18:40.093164    3179 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0615 10:18:40.093257    3179 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0615 10:18:40.099550    3179 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0615 10:18:40.099570    3179 docker.go:316] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0615 10:18:40.099619    3179 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0615 10:18:40.105417    3179 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0615 10:18:40.296075    3179 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0615 10:18:40.296191    3179 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W0615 10:18:40.307330    3179 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0615 10:18:40.307438    3179 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0615 10:18:40.312343    3179 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0615 10:18:40.312362    3179 docker.go:316] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0615 10:18:40.312397    3179 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0615 10:18:40.318312    3179 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0615 10:18:40.318333    3179 docker.go:316] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0615 10:18:40.318375    3179 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0615 10:18:40.318865    3179 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I0615 10:18:40.325019    3179 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	W0615 10:18:40.524460    3179 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0615 10:18:40.524609    3179 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0615 10:18:40.531064    3179 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0615 10:18:40.531087    3179 docker.go:316] Removing image: registry.k8s.io/coredns:1.6.7
	I0615 10:18:40.531129    3179 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I0615 10:18:40.537351    3179 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0615 10:18:40.897818    3179 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0615 10:18:40.920931    3179 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0615 10:18:40.920981    3179 docker.go:316] Removing image: registry.k8s.io/pause:3.2
	I0615 10:18:40.921099    3179 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0615 10:18:40.935783    3179 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0615 10:18:41.160476    3179 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0615 10:18:41.161005    3179 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0615 10:18:41.184239    3179 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0615 10:18:41.184292    3179 docker.go:316] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0615 10:18:41.184426    3179 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0615 10:18:41.208326    3179 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0615 10:18:41.208410    3179 cache_images.go:92] LoadImages completed in 2.552565s
	W0615 10:18:41.208477    3179 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
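The arch-mismatch warnings come from inspecting each cached image's config and finding amd64 where arm64 is wanted. A minimal sketch of that inspection with go-containerregistry, the library behind minikube's image.go; resolving the reference with default remote options is what surfaces the registry's default (amd64) manifest on an arm64 host:

	package main

	import (
		"fmt"

		"github.com/google/go-containerregistry/pkg/name"
		"github.com/google/go-containerregistry/pkg/v1/remote"
	)

	// imageArch fetches an image's config and reports its architecture.
	func imageArch(image string) (string, error) {
		ref, err := name.ParseReference(image)
		if err != nil {
			return "", err
		}
		img, err := remote.Image(ref)
		if err != nil {
			return "", err
		}
		cfg, err := img.ConfigFile()
		if err != nil {
			return "", err
		}
		return cfg.Architecture, nil
	}

	func main() {
		arch, err := imageArch("registry.k8s.io/kube-proxy:v1.18.20")
		fmt.Println(arch, err) // e.g. "amd64" where arm64 is wanted
	}
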
	I0615 10:18:41.208573    3179 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0615 10:18:41.224075    3179 cni.go:84] Creating CNI manager for ""
	I0615 10:18:41.224092    3179 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0615 10:18:41.224104    3179 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0615 10:18:41.224118    3179 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-422000 NodeName:ingress-addon-legacy-422000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0615 10:18:41.224255    3179 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-422000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0615 10:18:41.224305    3179 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-422000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-422000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
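The kubeadm config printed above is rendered from the options struct through a Go template. A trimmed-down sketch of the mechanism; the template here is illustrative, not minikube's actual one:

	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		if err := t.Execute(os.Stdout, map[string]interface{}{
			"AdvertiseAddress": "192.168.105.6",
			"APIServerPort":    8443,
			"CRISocket":        "/var/run/dockershim.sock",
			"NodeName":         "ingress-addon-legacy-422000",
		}); err != nil {
			panic(err)
		}
	}
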
	I0615 10:18:41.224396    3179 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0615 10:18:41.229432    3179 binaries.go:44] Found k8s binaries, skipping transfer
	I0615 10:18:41.229484    3179 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0615 10:18:41.233605    3179 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0615 10:18:41.240338    3179 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0615 10:18:41.246019    3179 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I0615 10:18:41.251753    3179 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0615 10:18:41.252983    3179 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
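Both /etc/hosts updates (host.minikube.internal earlier, control-plane.minikube.internal here) use the same pattern: filter out any stale line for the name, append the fresh mapping, and copy the result back over /etc/hosts. A pure-Go rendering of that idea (the real step is the bash one-liner shown in the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsEntry drops any existing line for name and appends "ip\tname".
	func setHostsEntry(hosts []byte, ip, name string) []byte {
		var out []string
		for _, line := range strings.Split(string(hosts), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop the stale entry for this name
			}
			if line != "" {
				out = append(out, line)
			}
		}
		out = append(out, ip+"\t"+name)
		return []byte(strings.Join(out, "\n") + "\n")
	}

	func main() {
		data, _ := os.ReadFile("/etc/hosts")
		fmt.Print(string(setHostsEntry(data, "192.168.105.6", "control-plane.minikube.internal")))
	}
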
	I0615 10:18:41.256549    3179 certs.go:56] Setting up /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000 for IP: 192.168.105.6
	I0615 10:18:41.256558    3179 certs.go:190] acquiring lock for shared ca certs: {Name:mk9ee4d7ca68f2cc32c8609d33f6ce33c43a91d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:18:41.256872    3179 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key
	I0615 10:18:41.257046    3179 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key
	I0615 10:18:41.257073    3179 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.key
	I0615 10:18:41.257079    3179 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt with IP's: []
	I0615 10:18:41.293910    3179 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt ...
	I0615 10:18:41.293915    3179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: {Name:mkc6f645a042044835f5da02c0b35a0acd8dd471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:18:41.294114    3179 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.key ...
	I0615 10:18:41.294120    3179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.key: {Name:mk202de43331dbde48bcdc2566f97e69ed23dbf1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:18:41.294250    3179 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.key.b354f644
	I0615 10:18:41.294259    3179 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0615 10:18:41.409214    3179 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.crt.b354f644 ...
	I0615 10:18:41.409218    3179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.crt.b354f644: {Name:mk8034971d55b0e00c98950b909ff06639c6c595 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:18:41.409365    3179 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.key.b354f644 ...
	I0615 10:18:41.409368    3179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.key.b354f644: {Name:mka40e3e6843bdf7a1edc598d20bacc300c07439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:18:41.409478    3179 certs.go:337] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.crt
	I0615 10:18:41.409576    3179 certs.go:341] copying /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.key
	I0615 10:18:41.409667    3179 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/proxy-client.key
	I0615 10:18:41.409674    3179 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/proxy-client.crt with IP's: []
	I0615 10:18:41.606247    3179 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/proxy-client.crt ...
	I0615 10:18:41.606256    3179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/proxy-client.crt: {Name:mkef488c8a7dba25607053a4ca105b3fa71c9f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:18:41.606478    3179 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/proxy-client.key ...
	I0615 10:18:41.606482    3179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/proxy-client.key: {Name:mk3d80146dfaaa4396730aa32331bcf7d2494e00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
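Each generated cert is a CA-signed leaf; the apiserver cert additionally carries the IP SANs listed in the log. A compact crypto/x509 sketch that signs such a leaf with a throwaway CA (key sizes and validity are assumptions for illustration, and error handling is elided for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA standing in for minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf with the IP SANs from the apiserver cert above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.105.6"), net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			},
		}
		der, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		fmt.Println("issued cert,", len(der), "DER bytes")
	}
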
	I0615 10:18:41.606610    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0615 10:18:41.606627    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0615 10:18:41.606642    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0615 10:18:41.606656    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0615 10:18:41.606670    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0615 10:18:41.606686    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0615 10:18:41.606698    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0615 10:18:41.606709    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0615 10:18:41.606807    3179 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/1313.pem (1338 bytes)
	W0615 10:18:41.607223    3179 certs.go:433] ignoring /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/1313_empty.pem, impossibly tiny 0 bytes
	I0615 10:18:41.607237    3179 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca-key.pem (1679 bytes)
	I0615 10:18:41.607268    3179 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem (1078 bytes)
	I0615 10:18:41.607289    3179 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem (1123 bytes)
	I0615 10:18:41.607314    3179 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/Users/jenkins/minikube-integration/16718-868/.minikube/certs/key.pem (1679 bytes)
	I0615 10:18:41.607371    3179 certs.go:437] found cert: /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem (1708 bytes)
	I0615 10:18:41.607403    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem -> /usr/share/ca-certificates/13132.pem
	I0615 10:18:41.607414    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0615 10:18:41.607424    3179 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/16718-868/.minikube/certs/1313.pem -> /usr/share/ca-certificates/1313.pem
	I0615 10:18:41.607818    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0615 10:18:41.615995    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0615 10:18:41.622681    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0615 10:18:41.629946    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0615 10:18:41.637414    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0615 10:18:41.644532    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0615 10:18:41.651257    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0615 10:18:41.658295    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0615 10:18:41.665548    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/ssl/certs/13132.pem --> /usr/share/ca-certificates/13132.pem (1708 bytes)
	I0615 10:18:41.672766    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0615 10:18:41.679398    3179 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/16718-868/.minikube/certs/1313.pem --> /usr/share/ca-certificates/1313.pem (1338 bytes)
	I0615 10:18:41.686085    3179 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0615 10:18:41.690995    3179 ssh_runner.go:195] Run: openssl version
	I0615 10:18:41.692850    3179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13132.pem && ln -fs /usr/share/ca-certificates/13132.pem /etc/ssl/certs/13132.pem"
	I0615 10:18:41.695721    3179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13132.pem
	I0615 10:18:41.697080    3179 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 15 17:14 /usr/share/ca-certificates/13132.pem
	I0615 10:18:41.697097    3179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13132.pem
	I0615 10:18:41.698865    3179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13132.pem /etc/ssl/certs/3ec20f2e.0"
	I0615 10:18:41.702093    3179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0615 10:18:41.705367    3179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0615 10:18:41.706950    3179 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 15 16:33 /usr/share/ca-certificates/minikubeCA.pem
	I0615 10:18:41.706971    3179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0615 10:18:41.708760    3179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0615 10:18:41.711435    3179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1313.pem && ln -fs /usr/share/ca-certificates/1313.pem /etc/ssl/certs/1313.pem"
	I0615 10:18:41.714457    3179 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1313.pem
	I0615 10:18:41.716038    3179 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 15 17:14 /usr/share/ca-certificates/1313.pem
	I0615 10:18:41.716059    3179 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1313.pem
	I0615 10:18:41.717755    3179 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1313.pem /etc/ssl/certs/51391683.0"
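
The openssl x509 -hash runs above explain the symlink names that follow them: OpenSSL looks CAs up in /etc/ssl/certs by <subject-hash>.0, so each installed PEM gets a link named after its subject hash (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the test certificates). A hypothetical Go sketch of that hash-and-link step, shelling out to openssl just as the ssh_runner commands do:

```go
// Hypothetical sketch: link a CA PEM into /etc/ssl/certs under its
// OpenSSL subject hash, mirroring the `test -L ... || ln -fs ...`
// commands in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // symlink already present
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
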
	I0615 10:18:41.720984    3179 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0615 10:18:41.722286    3179 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0615 10:18:41.722318    3179 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-422000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-422000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:18:41.722387    3179 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0615 10:18:41.728024    3179 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0615 10:18:41.730913    3179 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0615 10:18:41.734111    3179 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0615 10:18:41.737264    3179 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0615 10:18:41.737277    3179 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0615 10:18:41.763167    3179 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0615 10:18:41.763195    3179 kubeadm.go:322] [preflight] Running pre-flight checks
	I0615 10:18:41.851647    3179 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0615 10:18:41.851750    3179 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0615 10:18:41.851801    3179 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0615 10:18:41.897840    3179 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0615 10:18:41.897896    3179 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0615 10:18:41.897925    3179 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0615 10:18:41.987709    3179 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0615 10:18:41.995896    3179 out.go:204]   - Generating certificates and keys ...
	I0615 10:18:41.995931    3179 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0615 10:18:41.995985    3179 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0615 10:18:42.164852    3179 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0615 10:18:42.247929    3179 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0615 10:18:42.295692    3179 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0615 10:18:42.557952    3179 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0615 10:18:42.637764    3179 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0615 10:18:42.637842    3179 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-422000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0615 10:18:42.692934    3179 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0615 10:18:42.693863    3179 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-422000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0615 10:18:42.849092    3179 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0615 10:18:43.120331    3179 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0615 10:18:43.201988    3179 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0615 10:18:43.202061    3179 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0615 10:18:43.241940    3179 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0615 10:18:43.333967    3179 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0615 10:18:43.503015    3179 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0615 10:18:43.594139    3179 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0615 10:18:43.594349    3179 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0615 10:18:43.598594    3179 out.go:204]   - Booting up control plane ...
	I0615 10:18:43.598661    3179 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0615 10:18:43.598699    3179 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0615 10:18:43.598768    3179 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0615 10:18:43.598897    3179 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0615 10:18:43.600316    3179 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0615 10:18:55.601440    3179 kubeadm.go:322] [apiclient] All control plane components are healthy after 12.001255 seconds
	I0615 10:18:55.601528    3179 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0615 10:18:55.608328    3179 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0615 10:18:56.134393    3179 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0615 10:18:56.134609    3179 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-422000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0615 10:18:56.638364    3179 kubeadm.go:322] [bootstrap-token] Using token: 8xscdf.frrwy2w1frmmnvfu
	I0615 10:18:56.641939    3179 out.go:204]   - Configuring RBAC rules ...
	I0615 10:18:56.642027    3179 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0615 10:18:56.642791    3179 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0615 10:18:56.646796    3179 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0615 10:18:56.647851    3179 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0615 10:18:56.648795    3179 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0615 10:18:56.649583    3179 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0615 10:18:56.653445    3179 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0615 10:18:56.835535    3179 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0615 10:18:57.057402    3179 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0615 10:18:57.058039    3179 kubeadm.go:322] 
	I0615 10:18:57.058088    3179 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0615 10:18:57.058095    3179 kubeadm.go:322] 
	I0615 10:18:57.058172    3179 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0615 10:18:57.058178    3179 kubeadm.go:322] 
	I0615 10:18:57.058234    3179 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0615 10:18:57.058282    3179 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0615 10:18:57.058349    3179 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0615 10:18:57.058360    3179 kubeadm.go:322] 
	I0615 10:18:57.058397    3179 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0615 10:18:57.058492    3179 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0615 10:18:57.058553    3179 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0615 10:18:57.058559    3179 kubeadm.go:322] 
	I0615 10:18:57.058634    3179 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0615 10:18:57.058728    3179 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0615 10:18:57.058734    3179 kubeadm.go:322] 
	I0615 10:18:57.058798    3179 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8xscdf.frrwy2w1frmmnvfu \
	I0615 10:18:57.058881    3179 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 \
	I0615 10:18:57.058907    3179 kubeadm.go:322]     --control-plane 
	I0615 10:18:57.058910    3179 kubeadm.go:322] 
	I0615 10:18:57.058968    3179 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0615 10:18:57.058978    3179 kubeadm.go:322] 
	I0615 10:18:57.059036    3179 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8xscdf.frrwy2w1frmmnvfu \
	I0615 10:18:57.059138    3179 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:13ec1e4f42825d0267b189c0f8b830eda54e60d63681b653b209256c58602b59 
	I0615 10:18:57.059425    3179 kubeadm.go:322] W0615 17:18:42.141704    1414 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0615 10:18:57.059560    3179 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0615 10:18:57.059681    3179 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
	I0615 10:18:57.059774    3179 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0615 10:18:57.059891    3179 kubeadm.go:322] W0615 17:18:43.977265    1414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0615 10:18:57.060009    3179 kubeadm.go:322] W0615 17:18:43.977998    1414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
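
The sha256:13ec1e4f... value embedded in both join commands is kubeadm's CA public-key pin: the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes it from ca.crt (the in-VM certificateDir path from the log; adjust for wherever the CA lives):

```go
// Sketch: recompute kubeadm's --discovery-token-ca-cert-hash from a CA
// certificate, defined as the SHA-256 of its Subject Public Key Info.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```
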
	I0615 10:18:57.060019    3179 cni.go:84] Creating CNI manager for ""
	I0615 10:18:57.060031    3179 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0615 10:18:57.060051    3179 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0615 10:18:57.060149    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:18:57.060149    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627 minikube.k8s.io/name=ingress-addon-legacy-422000 minikube.k8s.io/updated_at=2023_06_15T10_18_57_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:18:57.182748    3179 ops.go:34] apiserver oom_adj: -16
	I0615 10:18:57.182826    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:18:57.718474    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:18:58.218575    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:18:58.718219    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:18:59.218342    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:18:59.718354    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:00.218408    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:00.718391    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:01.218445    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:01.718426    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:02.218449    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:02.718204    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:03.218144    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:03.718363    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:04.218316    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:04.718187    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:05.218384    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:05.718063    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:06.218300    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:06.718289    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:07.218311    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:07.718262    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:08.218269    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:08.718299    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:09.218212    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:09.718251    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:10.218235    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:10.718266    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:11.218222    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:11.717152    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:12.218007    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:12.717996    3179 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0615 10:19:12.774397    3179 kubeadm.go:1081] duration metric: took 15.714648958s to wait for elevateKubeSystemPrivileges.
	I0615 10:19:12.774412    3179 kubeadm.go:406] StartCluster complete in 31.052826959s
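
The burst of identical `kubectl get sa default` lines above is a fixed-interval poll: the probe is rerun roughly every 500ms until the default service account exists, and the duration metric reports the loop took about 15.7s. A minimal Go sketch of the same poll-until-success pattern (the probe command here is illustrative, not minikube's code):

```go
// Minimal sketch of the poll loop behind the repeated
// `kubectl get sa default` lines: retry a probe every 500ms until it
// succeeds or the context deadline expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitFor(ctx context.Context, probe func() error) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		if err := probe(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	err := waitFor(ctx, func() error {
		return exec.Command("kubectl", "get", "sa", "default").Run()
	})
	fmt.Println("wait finished:", err)
}
```
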
	I0615 10:19:12.774421    3179 settings.go:142] acquiring lock: {Name:mk45a698fcd8dd8ae6984c9cf4ad4d183fdb5424 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:19:12.774518    3179 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:19:12.774906    3179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/kubeconfig: {Name:mkbe9cac04fb467055323f2e3d5db2c6ddc287ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:19:12.775084    3179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0615 10:19:12.775131    3179 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0615 10:19:12.775189    3179 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-422000"
	I0615 10:19:12.775197    3179 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-422000"
	I0615 10:19:12.775224    3179 host.go:66] Checking if "ingress-addon-legacy-422000" exists ...
	I0615 10:19:12.775229    3179 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-422000"
	I0615 10:19:12.775237    3179 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-422000"
	I0615 10:19:12.775323    3179 kapi.go:59] client config for ingress-addon-legacy-422000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.key", CAFile:"/Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10390c3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0615 10:19:12.775473    3179 config.go:182] Loaded profile config "ingress-addon-legacy-422000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0615 10:19:12.775756    3179 cert_rotation.go:137] Starting client certificate rotation controller
	I0615 10:19:12.776717    3179 kapi.go:59] client config for ingress-addon-legacy-422000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.key", CAFile:"/Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10390c3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0615 10:19:12.780188    3179 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0615 10:19:12.784122    3179 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0615 10:19:12.784129    3179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0615 10:19:12.784139    3179 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/id_rsa Username:docker}
	I0615 10:19:12.792949    3179 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-422000"
	I0615 10:19:12.792973    3179 host.go:66] Checking if "ingress-addon-legacy-422000" exists ...
	I0615 10:19:12.793707    3179 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0615 10:19:12.793714    3179 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0615 10:19:12.793724    3179 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/id_rsa Username:docker}
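
The sshutil lines record the connection used to push the addon manifests: key-based SSH as the docker user to 192.168.105.6:22. A hypothetical sketch of such a client with golang.org/x/crypto/ssh (host-key verification is skipped here purely to keep the sketch short; a real client should pin the host key):

```go
// Hypothetical sketch of the "new ssh client" step above: key-based
// SSH as the docker user, then run a command on the VM.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyPEM, err := os.ReadFile("/Users/jenkins/minikube-integration/16718-868/.minikube/machines/ingress-addon-legacy-422000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
	}
	client, err := ssh.Dial("tcp", "192.168.105.6:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, _ := sess.CombinedOutput("uname -a")
	fmt.Print(string(out))
}
```
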
	W0615 10:19:12.797591    3179 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-422000" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0615 10:19:12.797607    3179 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0615 10:19:12.797617    3179 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:19:12.801133    3179 out.go:177] * Verifying Kubernetes components...
	I0615 10:19:12.809141    3179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 10:19:12.849152    3179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0615 10:19:12.851395    3179 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0615 10:19:12.871615    3179 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
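
The sed pipeline above patches the CoreDNS ConfigMap in place so pods can resolve host.minikube.internal to the host. After the replace, the relevant part of the Corefile should look roughly like the following (the surrounding plugins are the stock kubeadm Corefile and may differ):

```
        log
        errors
        ...
        hosts {
           192.168.105.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
```

The "[INFO] Reloading" lines in the coredns logs further down are consistent with the pods picking this edit up via the reload plugin.
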
	I0615 10:19:12.871888    3179 kapi.go:59] client config for ingress-addon-legacy-422000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.key", CAFile:"/Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10390c3a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0615 10:19:12.872044    3179 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-422000" to be "Ready" ...
	I0615 10:19:12.873458    3179 node_ready.go:49] node "ingress-addon-legacy-422000" has status "Ready":"True"
	I0615 10:19:12.873463    3179 node_ready.go:38] duration metric: took 1.41325ms waiting for node "ingress-addon-legacy-422000" to be "Ready" ...
	I0615 10:19:12.873466    3179 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0615 10:19:12.876790    3179 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-7s42m" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:13.196650    3179 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0615 10:19:13.192598    3179 start.go:901] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0615 10:19:13.204138    3179 addons.go:499] enable addons completed in 429.029833ms: enabled=[default-storageclass storage-provisioner]
	I0615 10:19:14.890422    3179 pod_ready.go:102] pod "coredns-66bff467f8-7s42m" in "kube-system" namespace has status "Ready":"False"
	I0615 10:19:17.392325    3179 pod_ready.go:102] pod "coredns-66bff467f8-7s42m" in "kube-system" namespace has status "Ready":"False"
	I0615 10:19:17.893147    3179 pod_ready.go:92] pod "coredns-66bff467f8-7s42m" in "kube-system" namespace has status "Ready":"True"
	I0615 10:19:17.893183    3179 pod_ready.go:81] duration metric: took 5.016469458s waiting for pod "coredns-66bff467f8-7s42m" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:17.893203    3179 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-czvhw" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:19.915037    3179 pod_ready.go:102] pod "coredns-66bff467f8-czvhw" in "kube-system" namespace has status "Ready":"False"
	I0615 10:19:21.410562    3179 pod_ready.go:92] pod "coredns-66bff467f8-czvhw" in "kube-system" namespace has status "Ready":"True"
	I0615 10:19:21.410591    3179 pod_ready.go:81] duration metric: took 3.517438667s waiting for pod "coredns-66bff467f8-czvhw" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:21.410604    3179 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-422000" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:21.415814    3179 pod_ready.go:92] pod "etcd-ingress-addon-legacy-422000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:19:21.415827    3179 pod_ready.go:81] duration metric: took 5.212208ms waiting for pod "etcd-ingress-addon-legacy-422000" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:21.415836    3179 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-422000" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:21.420548    3179 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-422000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:19:21.420560    3179 pod_ready.go:81] duration metric: took 4.7145ms waiting for pod "kube-apiserver-ingress-addon-legacy-422000" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:21.420569    3179 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-422000" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:21.424801    3179 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-422000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:19:21.424815    3179 pod_ready.go:81] duration metric: took 4.23975ms waiting for pod "kube-controller-manager-ingress-addon-legacy-422000" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:21.424823    3179 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-l956h" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:21.428699    3179 pod_ready.go:92] pod "kube-proxy-l956h" in "kube-system" namespace has status "Ready":"True"
	I0615 10:19:21.428710    3179 pod_ready.go:81] duration metric: took 3.881625ms waiting for pod "kube-proxy-l956h" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:21.428716    3179 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-422000" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:21.604575    3179 request.go:628] Waited for 175.800625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-422000
	I0615 10:19:21.804659    3179 request.go:628] Waited for 197.618916ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-422000
	I0615 10:19:21.811346    3179 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-422000" in "kube-system" namespace has status "Ready":"True"
	I0615 10:19:21.811372    3179 pod_ready.go:81] duration metric: took 382.652875ms waiting for pod "kube-scheduler-ingress-addon-legacy-422000" in "kube-system" namespace to be "Ready" ...
	I0615 10:19:21.811387    3179 pod_ready.go:38] duration metric: took 8.938071458s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
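
Each pod_ready wait above amounts to polling until the pod's PodReady condition reports True. A rough client-go sketch of the same check, reusing the kubeconfig path and one pod name from this run (this is an illustration, not minikube's pod_ready.go):

```go
// Rough sketch of a "pod Ready" wait with client-go: poll the pod until
// its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func ready(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/16718-868/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bff467f8-7s42m", metav1.GetOptions{})
		if err == nil && ready(p) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```
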
	I0615 10:19:21.811421    3179 api_server.go:52] waiting for apiserver process to appear ...
	I0615 10:19:21.811696    3179 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0615 10:19:21.827123    3179 api_server.go:72] duration metric: took 9.029650667s to wait for apiserver process to appear ...
	I0615 10:19:21.827142    3179 api_server.go:88] waiting for apiserver healthz status ...
	I0615 10:19:21.827159    3179 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0615 10:19:21.836339    3179 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0615 10:19:21.837357    3179 api_server.go:141] control plane version: v1.18.20
	I0615 10:19:21.837373    3179 api_server.go:131] duration metric: took 10.225292ms to wait for apiserver health ...
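
The healthz wait is a plain HTTPS GET against https://192.168.105.6:8443/healthz that expects the literal body "ok", as logged above. A sketch of the same probe that trusts only the cluster CA (the host-side ca.crt path is taken from this run):

```go
// Sketch of the apiserver healthz probe: HTTPS GET trusting only the
// cluster CA, expecting a 200 response with body "ok".
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/Users/jenkins/minikube-integration/16718-868/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get("https://192.168.105.6:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
```
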
	I0615 10:19:21.837379    3179 system_pods.go:43] waiting for kube-system pods to appear ...
	I0615 10:19:22.004588    3179 request.go:628] Waited for 167.136625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0615 10:19:22.019045    3179 system_pods.go:59] 8 kube-system pods found
	I0615 10:19:22.019088    3179 system_pods.go:61] "coredns-66bff467f8-7s42m" [08558845-5674-49a4-98e6-51821c0317b7] Running
	I0615 10:19:22.019099    3179 system_pods.go:61] "coredns-66bff467f8-czvhw" [9fb45ba8-0806-4593-b9fd-db7513fc6d86] Running
	I0615 10:19:22.019109    3179 system_pods.go:61] "etcd-ingress-addon-legacy-422000" [1e1e1425-dd76-4df6-bea7-0d76f4ddcccb] Running
	I0615 10:19:22.019118    3179 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-422000" [501c89bf-cf83-457c-8a4c-6297f29db154] Running
	I0615 10:19:22.019132    3179 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-422000" [3091ecf4-e550-4c6b-aaf8-1ca41d93f19c] Running
	I0615 10:19:22.019144    3179 system_pods.go:61] "kube-proxy-l956h" [a9fb13ea-f3cc-42d1-a60a-081fb310767c] Running
	I0615 10:19:22.019157    3179 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-422000" [4f5abb8a-69ae-4054-9f23-cf22ffdd9b7b] Running
	I0615 10:19:22.019169    3179 system_pods.go:61] "storage-provisioner" [8bb3bd84-4bbd-4471-84ed-527d364c3b59] Running
	I0615 10:19:22.019177    3179 system_pods.go:74] duration metric: took 181.794833ms to wait for pod list to return data ...
	I0615 10:19:22.019201    3179 default_sa.go:34] waiting for default service account to be created ...
	I0615 10:19:22.204588    3179 request.go:628] Waited for 185.232209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0615 10:19:22.210564    3179 default_sa.go:45] found service account: "default"
	I0615 10:19:22.210595    3179 default_sa.go:55] duration metric: took 191.383791ms for default service account to be created ...
	I0615 10:19:22.210611    3179 system_pods.go:116] waiting for k8s-apps to be running ...
	I0615 10:19:22.404640    3179 request.go:628] Waited for 193.884ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0615 10:19:22.418255    3179 system_pods.go:86] 8 kube-system pods found
	I0615 10:19:22.418288    3179 system_pods.go:89] "coredns-66bff467f8-7s42m" [08558845-5674-49a4-98e6-51821c0317b7] Running
	I0615 10:19:22.418300    3179 system_pods.go:89] "coredns-66bff467f8-czvhw" [9fb45ba8-0806-4593-b9fd-db7513fc6d86] Running
	I0615 10:19:22.418313    3179 system_pods.go:89] "etcd-ingress-addon-legacy-422000" [1e1e1425-dd76-4df6-bea7-0d76f4ddcccb] Running
	I0615 10:19:22.418327    3179 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-422000" [501c89bf-cf83-457c-8a4c-6297f29db154] Running
	I0615 10:19:22.418338    3179 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-422000" [3091ecf4-e550-4c6b-aaf8-1ca41d93f19c] Running
	I0615 10:19:22.418347    3179 system_pods.go:89] "kube-proxy-l956h" [a9fb13ea-f3cc-42d1-a60a-081fb310767c] Running
	I0615 10:19:22.418398    3179 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-422000" [4f5abb8a-69ae-4054-9f23-cf22ffdd9b7b] Running
	I0615 10:19:22.418413    3179 system_pods.go:89] "storage-provisioner" [8bb3bd84-4bbd-4471-84ed-527d364c3b59] Running
	I0615 10:19:22.418425    3179 system_pods.go:126] duration metric: took 207.801833ms to wait for k8s-apps to be running ...
	I0615 10:19:22.418440    3179 system_svc.go:44] waiting for kubelet service to be running ....
	I0615 10:19:22.418687    3179 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0615 10:19:22.435193    3179 system_svc.go:56] duration metric: took 16.75275ms WaitForService to wait for kubelet.
	I0615 10:19:22.435210    3179 kubeadm.go:581] duration metric: took 9.6377525s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0615 10:19:22.435229    3179 node_conditions.go:102] verifying NodePressure condition ...
	I0615 10:19:22.604596    3179 request.go:628] Waited for 169.273125ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0615 10:19:22.613200    3179 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0615 10:19:22.613263    3179 node_conditions.go:123] node cpu capacity is 2
	I0615 10:19:22.613293    3179 node_conditions.go:105] duration metric: took 178.058542ms to run NodePressure ...
	I0615 10:19:22.613324    3179 start.go:228] waiting for startup goroutines ...
	I0615 10:19:22.613340    3179 start.go:233] waiting for cluster config update ...
	I0615 10:19:22.613374    3179 start.go:242] writing updated cluster config ...
	I0615 10:19:22.614665    3179 ssh_runner.go:195] Run: rm -f paused
	I0615 10:19:22.759762    3179 start.go:582] kubectl: 1.25.9, cluster: 1.18.20 (minor skew: 7)
	I0615 10:19:22.764502    3179 out.go:177] 
	W0615 10:19:22.768485    3179 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.18.20.
	I0615 10:19:22.772494    3179 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0615 10:19:22.780605    3179 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-422000" cluster and "default" namespace by default
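
The closing warning comes from a simple minor-version skew check: local kubectl 1.25 against cluster 1.18 is a skew of 7 minors, far beyond the one-minor tolerance the message implies. A deliberately naive Go sketch of that comparison:

```go
// Naive sketch of the kubectl/cluster minor-version skew check that
// produces the warning above. Real version parsing is more careful.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1])
	return m
}

func main() {
	kubectl, cluster := "1.25.9", "1.18.20"
	skew := minor(kubectl) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // minor skew: 7
	if skew > 1 {
		fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n", kubectl, cluster)
	}
}
```
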
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-06-15 17:18:29 UTC, ends at Thu 2023-06-15 17:20:31 UTC. --
	Jun 15 17:20:02 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:02.516239326Z" level=info msg="shim disconnected" id=9f357bc91d7746fea15142d2b74db1a9387cda0a884b73b7c53266f8099570c6 namespace=moby
	Jun 15 17:20:02 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:02.516267370Z" level=warning msg="cleaning up after shim disconnected" id=9f357bc91d7746fea15142d2b74db1a9387cda0a884b73b7c53266f8099570c6 namespace=moby
	Jun 15 17:20:02 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:02.516271412Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:15.728417498Z" level=info msg="shim disconnected" id=6cfe53247a24348ac0505fd6d320d94d780c42316630343edb62893f1ad5dfbb namespace=moby
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:15.728622964Z" level=warning msg="cleaning up after shim disconnected" id=6cfe53247a24348ac0505fd6d320d94d780c42316630343edb62893f1ad5dfbb namespace=moby
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:15.728636090Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1066]: time="2023-06-15T17:20:15.746522634Z" level=info msg="ignoring event" container=6cfe53247a24348ac0505fd6d320d94d780c42316630343edb62893f1ad5dfbb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:15.757167199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:15.757269870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:15.757282829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:15.757291787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:15.795610596Z" level=info msg="shim disconnected" id=b1f326576cf439d0803e3b7b95eec448b19aaf92db368906318656075c2b98ca namespace=moby
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1066]: time="2023-06-15T17:20:15.795561761Z" level=info msg="ignoring event" container=b1f326576cf439d0803e3b7b95eec448b19aaf92db368906318656075c2b98ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:15.795738851Z" level=warning msg="cleaning up after shim disconnected" id=b1f326576cf439d0803e3b7b95eec448b19aaf92db368906318656075c2b98ca namespace=moby
	Jun 15 17:20:15 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:15.795760519Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 17:20:26 ingress-addon-legacy-422000 dockerd[1066]: time="2023-06-15T17:20:26.151516047Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=c36324fe8f6395d2d1cadd851edaad947f17617f9705e6fd1014bf7df8dae7e1
	Jun 15 17:20:26 ingress-addon-legacy-422000 dockerd[1066]: time="2023-06-15T17:20:26.178360809Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=c36324fe8f6395d2d1cadd851edaad947f17617f9705e6fd1014bf7df8dae7e1
	Jun 15 17:20:26 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:26.232252174Z" level=info msg="shim disconnected" id=c36324fe8f6395d2d1cadd851edaad947f17617f9705e6fd1014bf7df8dae7e1 namespace=moby
	Jun 15 17:20:26 ingress-addon-legacy-422000 dockerd[1066]: time="2023-06-15T17:20:26.232615601Z" level=info msg="ignoring event" container=c36324fe8f6395d2d1cadd851edaad947f17617f9705e6fd1014bf7df8dae7e1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 15 17:20:26 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:26.232658603Z" level=warning msg="cleaning up after shim disconnected" id=c36324fe8f6395d2d1cadd851edaad947f17617f9705e6fd1014bf7df8dae7e1 namespace=moby
	Jun 15 17:20:26 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:26.232669061Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 17:20:26 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:26.270504362Z" level=info msg="shim disconnected" id=a2c0af188c508daa9c5db04e0c3685fafb849310539298836d502ac2707562d5 namespace=moby
	Jun 15 17:20:26 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:26.270538154Z" level=warning msg="cleaning up after shim disconnected" id=a2c0af188c508daa9c5db04e0c3685fafb849310539298836d502ac2707562d5 namespace=moby
	Jun 15 17:20:26 ingress-addon-legacy-422000 dockerd[1072]: time="2023-06-15T17:20:26.270543196Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 15 17:20:26 ingress-addon-legacy-422000 dockerd[1066]: time="2023-06-15T17:20:26.270657950Z" level=info msg="ignoring event" container=a2c0af188c508daa9c5db04e0c3685fafb849310539298836d502ac2707562d5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	b1f326576cf43       13753a81eccfd                                                                                                      16 seconds ago       Exited              hello-world-app           2                   5dbb3117b913c
	8ef334f640d27       nginx@sha256:9b0582aaf2b2d6ffc2451630c28cb2b0019905f1bee8a38add596b4904522381                                      39 seconds ago       Running             nginx                     0                   0476725a782a4
	c36324fe8f639       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   53 seconds ago       Exited              controller                0                   a2c0af188c508
	548fe5296e0da       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   6e7bc0de0670a
	a05730ca497d9       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   4208c91ca25e2
	847f07b801ed6       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   2b88c6127aba8
	765cbeb0cc24f       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   52673bd380abf
	38f71c03383d0       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   98686b4fd400d
	b9310faf51eb5       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   aa6611d324e79
	f18012b2fb0c6       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   27a84c8b81823
	8a62cfb18b17c       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   2d50143dde510
	8d9ee5bf2f982       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   1ac83f8022d7c
	e94bdd860ccb3       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   afc33b9deee81
	
	* 
	* ==> coredns [38f71c03383d] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = cb78e7fd356afb50fc9964e5378f29cc
	[INFO] Reloading complete
	[INFO] 172.17.0.1:35165 - 13326 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000073796s
	[INFO] 172.17.0.1:35165 - 22390 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049045s
	[INFO] 172.17.0.1:35165 - 30308 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042294s
	[INFO] 172.17.0.1:35165 - 18509 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000028377s
	[INFO] 172.17.0.1:35165 - 26302 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059629s
	[INFO] 172.17.0.1:35165 - 58463 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000030377s
	[INFO] 172.17.0.1:35165 - 41180 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000036752s
	[INFO] 172.17.0.1:45502 - 37136 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000040169s
	[INFO] 172.17.0.1:45502 - 10447 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000016959s
	[INFO] 172.17.0.1:45502 - 24958 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000014501s
	[INFO] 172.17.0.1:45502 - 657 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015709s
	[INFO] 172.17.0.1:45502 - 41809 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012751s
	[INFO] 172.17.0.1:45502 - 10274 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000014501s
	[INFO] 172.17.0.1:45502 - 47255 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014709s
	
	* 
	* ==> coredns [765cbeb0cc24] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
	CoreDNS-1.6.7
	linux/arm64, go1.13.6, da7f65b
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = cb78e7fd356afb50fc9964e5378f29cc
	[INFO] Reloading complete
	[INFO] 127.0.0.1:53568 - 61939 "HINFO IN 5688182998115856670.3724723525383115377. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004464956s
	[INFO] 172.17.0.1:21939 - 54171 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000100798s
	[INFO] 172.17.0.1:21939 - 2680 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065004s
	[INFO] 172.17.0.1:21939 - 45056 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000050295s
	[INFO] 172.17.0.1:21939 - 8588 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030543s
	[INFO] 172.17.0.1:21939 - 23119 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028001s
	[INFO] 172.17.0.1:21939 - 19530 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000040128s
	[INFO] 172.17.0.1:21939 - 5678 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000035627s
	[INFO] 172.17.0.1:3807 - 11686 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000020335s
	[INFO] 172.17.0.1:3807 - 9981 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013334s
	[INFO] 172.17.0.1:3807 - 52853 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013168s
	[INFO] 172.17.0.1:3807 - 13429 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00000775s
	[INFO] 172.17.0.1:3807 - 34102 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044711s
	[INFO] 172.17.0.1:3807 - 55618 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000017001s
	[INFO] 172.17.0.1:3807 - 41295 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000017293s
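
Note: the long NXDOMAIN/NOERROR runs above are the expected search-path fan-out for an in-cluster lookup, not a fault: the client's resolver appends each search suffix in turn until the fully-qualified name answers NOERROR. A minimal sketch of the resolv.conf kubelet would generate for the querying pod (search suffixes inferred from the query names above; the nameserver address is an assumed default and does not appear in these logs):

	search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local
	nameserver 10.96.0.10    # assumed kube-dns ClusterIP, not shown in these logs
	options ndots:5          # names with fewer than 5 dots try the search list first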
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-422000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-422000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1814abc5e40384accb8747bfb7e33027343c9627
	                    minikube.k8s.io/name=ingress-addon-legacy-422000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_15T10_18_57_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Jun 2023 17:18:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-422000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Jun 2023 17:20:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Jun 2023 17:20:03 +0000   Thu, 15 Jun 2023 17:18:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Jun 2023 17:20:03 +0000   Thu, 15 Jun 2023 17:18:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Jun 2023 17:20:03 +0000   Thu, 15 Jun 2023 17:18:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Jun 2023 17:20:03 +0000   Thu, 15 Jun 2023 17:19:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-422000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003124Ki
	  pods:               110
	System Info:
	  Machine ID:                 f2cc99f8159743bfb36011b9e851e6bf
	  System UUID:                f2cc99f8159743bfb36011b9e851e6bf
	  Boot ID:                    d63b0253-1176-466c-9f6c-543f1924a2e6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-hvsxd                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 coredns-66bff467f8-7s42m                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     79s
	  kube-system                 coredns-66bff467f8-czvhw                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     79s
	  kube-system                 etcd-ingress-addon-legacy-422000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-apiserver-ingress-addon-legacy-422000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-422000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-l956h                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-ingress-addon-legacy-422000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             140Mi (3%)  340Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 88s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  88s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s   kubelet     Node ingress-addon-legacy-422000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s   kubelet     Node ingress-addon-legacy-422000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s   kubelet     Node ingress-addon-legacy-422000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                88s   kubelet     Node ingress-addon-legacy-422000 status is now: NodeReady
	  Normal  Starting                 78s   kube-proxy  Starting kube-proxy.
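
The node description above can be regenerated against the same profile, assuming the kubeconfig context that minikube created still exists (the context name matches the profile name):

	$ kubectl --context ingress-addon-legacy-422000 describe node ingress-addon-legacy-422000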
	
	* 
	* ==> dmesg <==
	* [Jun15 17:18] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.631143] EINJ: EINJ table not found.
	[  +0.497134] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +0.043685] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000829] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +3.198030] systemd-fstab-generator[480]: Ignoring "noauto" for root device
	[  +0.081998] systemd-fstab-generator[491]: Ignoring "noauto" for root device
	[  +0.439163] systemd-fstab-generator[708]: Ignoring "noauto" for root device
	[  +0.172614] systemd-fstab-generator[745]: Ignoring "noauto" for root device
	[  +0.082712] systemd-fstab-generator[756]: Ignoring "noauto" for root device
	[  +0.083898] systemd-fstab-generator[769]: Ignoring "noauto" for root device
	[  +1.149313] kauditd_printk_skb: 17 callbacks suppressed
	[  +3.241535] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +4.860650] systemd-fstab-generator[1538]: Ignoring "noauto" for root device
	[  +8.614370] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.063089] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +6.076044] systemd-fstab-generator[2620]: Ignoring "noauto" for root device
	[Jun15 17:19] kauditd_printk_skb: 7 callbacks suppressed
	[  +7.818538] kauditd_printk_skb: 10 callbacks suppressed
	[  +8.039122] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +29.938121] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [8a62cfb18b17] <==
	* raft2023/06/15 17:18:52 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/06/15 17:18:52 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/06/15 17:18:52 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/06/15 17:18:52 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-06-15 17:18:52.380124 W | auth: simple token is not cryptographically signed
	2023-06-15 17:18:52.380959 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-06-15 17:18:52.393559 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-06-15 17:18:52.393626 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-06-15 17:18:52.393716 I | embed: listening for peers on 192.168.105.6:2380
	2023-06-15 17:18:52.393772 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/06/15 17:18:52 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-06-15 17:18:52.393993 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/06/15 17:18:53 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/06/15 17:18:53 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/06/15 17:18:53 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/06/15 17:18:53 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/06/15 17:18:53 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-06-15 17:18:53.168475 I | etcdserver: published {Name:ingress-addon-legacy-422000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-06-15 17:18:53.168593 I | etcdserver: setting up the initial cluster version to 3.4
	2023-06-15 17:18:53.168713 I | embed: ready to serve client requests
	2023-06-15 17:18:53.169091 I | embed: ready to serve client requests
	2023-06-15 17:18:53.171286 I | embed: serving client requests on 127.0.0.1:2379
	2023-06-15 17:18:53.171393 I | embed: serving client requests on 192.168.105.6:2379
	2023-06-15 17:18:53.179217 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-06-15 17:18:53.179276 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  17:20:31 up 2 min,  0 users,  load average: 0.69, 0.27, 0.10
	Linux ingress-addon-legacy-422000 5.10.57 #1 SMP PREEMPT Wed Jun 14 05:08:37 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [8d9ee5bf2f98] <==
	* I0615 17:18:54.829872       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0615 17:18:54.829888       1 cache.go:39] Caches are synced for autoregister controller
	I0615 17:18:54.829980       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0615 17:18:54.829893       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0615 17:18:55.725111       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0615 17:18:55.725542       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0615 17:18:55.737035       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0615 17:18:55.743239       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0615 17:18:55.743422       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0615 17:18:55.884750       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0615 17:18:55.894908       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0615 17:18:55.998069       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0615 17:18:55.998584       1 controller.go:609] quota admission added evaluator for: endpoints
	I0615 17:18:55.999939       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0615 17:18:57.051257       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0615 17:18:57.210253       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0615 17:18:57.429905       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0615 17:18:57.560711       1 log.go:172] http: TLS handshake error from 127.0.0.1:60246: read tcp 127.0.0.1:8443->127.0.0.1:60246: read: connection reset by peer
	I0615 17:18:57.560749       1 log.go:172] http: TLS handshake error from 127.0.0.1:60252: read tcp 127.0.0.1:8443->127.0.0.1:60252: read: connection reset by peer
	I0615 17:18:57.560761       1 log.go:172] http: TLS handshake error from 127.0.0.1:60260: EOF
	I0615 17:19:03.584997       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0615 17:19:12.717678       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0615 17:19:12.866832       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0615 17:19:23.046482       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0615 17:19:48.600356       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [e94bdd860ccb] <==
	* I0615 17:19:12.754316       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0615 17:19:12.796860       1 shared_informer.go:230] Caches are synced for job 
	I0615 17:19:12.800076       1 shared_informer.go:230] Caches are synced for HPA 
	I0615 17:19:12.865306       1 shared_informer.go:230] Caches are synced for deployment 
	I0615 17:19:12.869006       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"20877001-8bd5-40fd-aef6-ac6ff0639cd1", APIVersion:"apps/v1", ResourceVersion:"201", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
	I0615 17:19:12.871766       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0615 17:19:12.875860       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"a299dfd2-2cfa-4b0d-87b7-367eb074d4bc", APIVersion:"apps/v1", ResourceVersion:"326", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-czvhw
	I0615 17:19:12.882449       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"a299dfd2-2cfa-4b0d-87b7-367eb074d4bc", APIVersion:"apps/v1", ResourceVersion:"326", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-7s42m
	I0615 17:19:12.927550       1 shared_informer.go:230] Caches are synced for disruption 
	I0615 17:19:12.927564       1 disruption.go:339] Sending events to api server.
	I0615 17:19:12.967154       1 shared_informer.go:230] Caches are synced for resource quota 
	I0615 17:19:13.064318       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0615 17:19:13.064332       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0615 17:19:13.115160       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0615 17:19:13.313179       1 request.go:621] Throttling request took 1.047917176s, request: GET:https://control-plane.minikube.internal:8443/apis/autoscaling/v2beta2?timeout=32s
	I0615 17:19:13.913922       1 shared_informer.go:223] Waiting for caches to sync for resource quota
	I0615 17:19:13.914041       1 shared_informer.go:230] Caches are synced for resource quota 
	I0615 17:19:23.042826       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"f697ff68-4c6a-4fe2-92f9-e6ac6b4574a8", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0615 17:19:23.051300       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"cfd9f535-4ecc-4dbf-a994-6d6121600ebd", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-mrcwq
	I0615 17:19:23.061883       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5fa38405-498b-467b-b2ad-98560f3cac74", APIVersion:"batch/v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-7m6q2
	I0615 17:19:23.067964       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"a7f282ff-6d30-48cc-9deb-49fec0629a49", APIVersion:"batch/v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-v2qhd
	I0615 17:19:25.860083       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"5fa38405-498b-467b-b2ad-98560f3cac74", APIVersion:"batch/v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0615 17:19:26.897239       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"a7f282ff-6d30-48cc-9deb-49fec0629a49", APIVersion:"batch/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0615 17:19:58.878663       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"f9b2649f-b35d-4b2d-9fab-7ce5d1491740", APIVersion:"apps/v1", ResourceVersion:"566", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0615 17:19:58.884283       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"5f67f883-03be-4225-bfbc-13efd244271c", APIVersion:"apps/v1", ResourceVersion:"567", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-hvsxd
	
	* 
	* ==> kube-proxy [b9310faf51eb] <==
	* W0615 17:19:13.432828       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0615 17:19:13.437218       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0615 17:19:13.437238       1 server_others.go:186] Using iptables Proxier.
	I0615 17:19:13.437403       1 server.go:583] Version: v1.18.20
	I0615 17:19:13.438672       1 config.go:315] Starting service config controller
	I0615 17:19:13.438691       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0615 17:19:13.438730       1 config.go:133] Starting endpoints config controller
	I0615 17:19:13.438733       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0615 17:19:13.538811       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0615 17:19:13.538877       1 shared_informer.go:230] Caches are synced for service config 
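
kube-proxy fell back to the iptables proxier because no proxy mode was configured ("Unknown proxy mode \"\""). If the service rules need checking, they live in the nat table on the node; a quick look from inside the VM (reached via minikube ssh; KUBE-SERVICES is the standard entry chain for iptables mode):

	$ sudo iptables -t nat -L KUBE-SERVICES -n | head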
	
	* 
	* ==> kube-scheduler [f18012b2fb0c] <==
	* W0615 17:18:54.782267       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0615 17:18:54.782298       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0615 17:18:54.790318       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0615 17:18:54.790331       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0615 17:18:54.791253       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0615 17:18:54.791294       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0615 17:18:54.791300       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0615 17:18:54.791313       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0615 17:18:54.792718       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0615 17:18:54.792791       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0615 17:18:54.794986       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0615 17:18:54.795157       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0615 17:18:54.795217       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0615 17:18:54.795282       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0615 17:18:54.795325       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0615 17:18:54.795365       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0615 17:18:54.795415       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0615 17:18:54.795457       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 17:18:54.795510       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0615 17:18:54.795597       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0615 17:18:55.646429       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0615 17:18:55.706160       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0615 17:18:55.706863       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0615 17:18:55.830208       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0615 17:18:56.291501       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-06-15 17:18:29 UTC, ends at Thu 2023-06-15 17:20:31 UTC. --
	Jun 15 17:20:04 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:04.491386    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9f357bc91d7746fea15142d2b74db1a9387cda0a884b73b7c53266f8099570c6
	Jun 15 17:20:04 ingress-addon-legacy-422000 kubelet[2627]: E0615 17:20:04.492255    2627 pod_workers.go:191] Error syncing pod 01a56e1f-bcd1-4137-b9e5-b423166a1433 ("hello-world-app-5f5d8b66bb-hvsxd_default(01a56e1f-bcd1-4137-b9e5-b423166a1433)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-hvsxd_default(01a56e1f-bcd1-4137-b9e5-b423166a1433)"
	Jun 15 17:20:14 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:14.332798    2627 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-tp5rs" (UniqueName: "kubernetes.io/secret/8f972d51-9f36-4690-bb47-0225f63ae276-minikube-ingress-dns-token-tp5rs") pod "8f972d51-9f36-4690-bb47-0225f63ae276" (UID: "8f972d51-9f36-4690-bb47-0225f63ae276")
	Jun 15 17:20:14 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:14.337853    2627 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f972d51-9f36-4690-bb47-0225f63ae276-minikube-ingress-dns-token-tp5rs" (OuterVolumeSpecName: "minikube-ingress-dns-token-tp5rs") pod "8f972d51-9f36-4690-bb47-0225f63ae276" (UID: "8f972d51-9f36-4690-bb47-0225f63ae276"). InnerVolumeSpecName "minikube-ingress-dns-token-tp5rs". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 15 17:20:14 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:14.432998    2627 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-tp5rs" (UniqueName: "kubernetes.io/secret/8f972d51-9f36-4690-bb47-0225f63ae276-minikube-ingress-dns-token-tp5rs") on node "ingress-addon-legacy-422000" DevicePath ""
	Jun 15 17:20:15 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:15.665263    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9f357bc91d7746fea15142d2b74db1a9387cda0a884b73b7c53266f8099570c6
	Jun 15 17:20:15 ingress-addon-legacy-422000 kubelet[2627]: W0615 17:20:15.811368    2627 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod01a56e1f-bcd1-4137-b9e5-b423166a1433/b1f326576cf439d0803e3b7b95eec448b19aaf92db368906318656075c2b98ca": none of the resources are being tracked.
	Jun 15 17:20:16 ingress-addon-legacy-422000 kubelet[2627]: W0615 17:20:16.695770    2627 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-hvsxd through plugin: invalid network status for
	Jun 15 17:20:16 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:16.703090    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9f357bc91d7746fea15142d2b74db1a9387cda0a884b73b7c53266f8099570c6
	Jun 15 17:20:16 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:16.704772    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b1f326576cf439d0803e3b7b95eec448b19aaf92db368906318656075c2b98ca
	Jun 15 17:20:16 ingress-addon-legacy-422000 kubelet[2627]: E0615 17:20:16.705162    2627 pod_workers.go:191] Error syncing pod 01a56e1f-bcd1-4137-b9e5-b423166a1433 ("hello-world-app-5f5d8b66bb-hvsxd_default(01a56e1f-bcd1-4137-b9e5-b423166a1433)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-hvsxd_default(01a56e1f-bcd1-4137-b9e5-b423166a1433)"
	Jun 15 17:20:16 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:16.723444    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 66f9e5ecf4e19e22d0b8c992b440713189acead8e7e0f00b2646a82dc4b0f69a
	Jun 15 17:20:17 ingress-addon-legacy-422000 kubelet[2627]: W0615 17:20:17.739825    2627 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-hvsxd through plugin: invalid network status for
	Jun 15 17:20:24 ingress-addon-legacy-422000 kubelet[2627]: E0615 17:20:24.142203    2627 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-mrcwq.1768e4cbbc601c69", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-mrcwq", UID:"7444e307-a25d-4ccf-8c2f-2a8b4ef80269", APIVersion:"v1", ResourceVersion:"440", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-422000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc11aeff6085aac69, ext:86958987301, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc11aeff6085aac69, ext:86958987301, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-mrcwq.1768e4cbbc601c69" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 15 17:20:24 ingress-addon-legacy-422000 kubelet[2627]: E0615 17:20:24.171204    2627 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-mrcwq.1768e4cbbc601c69", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-mrcwq", UID:"7444e307-a25d-4ccf-8c2f-2a8b4ef80269", APIVersion:"v1", ResourceVersion:"440", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-422000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc11aeff6085aac69, ext:86958987301, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc11aeff608ca99f8, ext:86966322611, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-mrcwq.1768e4cbbc601c69" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 15 17:20:26 ingress-addon-legacy-422000 kubelet[2627]: W0615 17:20:26.895229    2627 pod_container_deletor.go:77] Container "a2c0af188c508daa9c5db04e0c3685fafb849310539298836d502ac2707562d5" not found in pod's containers
	Jun 15 17:20:28 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:28.298262    2627 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-nc796" (UniqueName: "kubernetes.io/secret/7444e307-a25d-4ccf-8c2f-2a8b4ef80269-ingress-nginx-token-nc796") pod "7444e307-a25d-4ccf-8c2f-2a8b4ef80269" (UID: "7444e307-a25d-4ccf-8c2f-2a8b4ef80269")
	Jun 15 17:20:28 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:28.298368    2627 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7444e307-a25d-4ccf-8c2f-2a8b4ef80269-webhook-cert") pod "7444e307-a25d-4ccf-8c2f-2a8b4ef80269" (UID: "7444e307-a25d-4ccf-8c2f-2a8b4ef80269")
	Jun 15 17:20:28 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:28.304218    2627 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7444e307-a25d-4ccf-8c2f-2a8b4ef80269-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7444e307-a25d-4ccf-8c2f-2a8b4ef80269" (UID: "7444e307-a25d-4ccf-8c2f-2a8b4ef80269"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 15 17:20:28 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:28.306836    2627 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7444e307-a25d-4ccf-8c2f-2a8b4ef80269-ingress-nginx-token-nc796" (OuterVolumeSpecName: "ingress-nginx-token-nc796") pod "7444e307-a25d-4ccf-8c2f-2a8b4ef80269" (UID: "7444e307-a25d-4ccf-8c2f-2a8b4ef80269"). InnerVolumeSpecName "ingress-nginx-token-nc796". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 15 17:20:28 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:28.399382    2627 reconciler.go:319] Volume detached for volume "ingress-nginx-token-nc796" (UniqueName: "kubernetes.io/secret/7444e307-a25d-4ccf-8c2f-2a8b4ef80269-ingress-nginx-token-nc796") on node "ingress-addon-legacy-422000" DevicePath ""
	Jun 15 17:20:28 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:28.399531    2627 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7444e307-a25d-4ccf-8c2f-2a8b4ef80269-webhook-cert") on node "ingress-addon-legacy-422000" DevicePath ""
	Jun 15 17:20:29 ingress-addon-legacy-422000 kubelet[2627]: I0615 17:20:29.652508    2627 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: b1f326576cf439d0803e3b7b95eec448b19aaf92db368906318656075c2b98ca
	Jun 15 17:20:29 ingress-addon-legacy-422000 kubelet[2627]: E0615 17:20:29.654387    2627 pod_workers.go:191] Error syncing pod 01a56e1f-bcd1-4137-b9e5-b423166a1433 ("hello-world-app-5f5d8b66bb-hvsxd_default(01a56e1f-bcd1-4137-b9e5-b423166a1433)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-hvsxd_default(01a56e1f-bcd1-4137-b9e5-b423166a1433)"
	Jun 15 17:20:29 ingress-addon-legacy-422000 kubelet[2627]: W0615 17:20:29.684950    2627 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/7444e307-a25d-4ccf-8c2f-2a8b4ef80269/volumes" does not exist
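
The kubelet entries show hello-world-app cycling through CrashLoopBackOff with the usual exponential back-off (10s, then 20s). The standard next step is to read the logs of the previous, failed container instance (pod name taken from the log lines above):

	$ kubectl --context ingress-addon-legacy-422000 logs hello-world-app-5f5d8b66bb-hvsxd --previous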
	
	* 
	* ==> storage-provisioner [847f07b801ed] <==
	* I0615 17:19:16.230181       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0615 17:19:16.234566       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0615 17:19:16.234632       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0615 17:19:16.240925       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0615 17:19:16.241010       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-422000_5c6137c7-3054-447d-99db-c2a2c2463e0e!
	I0615 17:19:16.241502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"51d53eb6-7578-4f2c-9bbd-f5cc5fe5bb2e", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-422000_5c6137c7-3054-447d-99db-c2a2c2463e0e became leader
	I0615 17:19:16.342014       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-422000_5c6137c7-3054-447d-99db-c2a2c2463e0e!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-422000 -n ingress-addon-legacy-422000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-422000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (52.19s)

TestMountStart/serial/StartWithMountFirst (10.22s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-963000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
E0615 10:22:39.215148    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-963000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.153612292s)

-- stdout --
	* [mount-start-1-963000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-963000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-963000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-963000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-963000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-963000 -n mount-start-1-963000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-963000 -n mount-start-1-963000: exit status 7 (68.980084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-963000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.22s)
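
Both VM creation attempts above died on the same refused connection to /var/run/socket_vmnet, so the failure is environmental rather than specific to this test: the socket_vmnet daemon was not reachable on the host. A quick host-side sanity check (paths taken from the error text; assumes a standard socket_vmnet install):

	$ pgrep -fl socket_vmnet        # is the daemon process running?
	$ ls -l /var/run/socket_vmnet   # does the listening socket exist?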

TestMultiNode/serial/FreshStart2Nodes (9.77s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-506000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-506000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.702553292s)

-- stdout --
	* [multinode-506000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-506000 in cluster multinode-506000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-506000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:22:40.356478    3493 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:22:40.356597    3493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:22:40.356599    3493 out.go:309] Setting ErrFile to fd 2...
	I0615 10:22:40.356602    3493 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:22:40.356668    3493 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:22:40.357763    3493 out.go:303] Setting JSON to false
	I0615 10:22:40.372835    3493 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3131,"bootTime":1686846629,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:22:40.372906    3493 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:22:40.376848    3493 out.go:177] * [multinode-506000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:22:40.384736    3493 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:22:40.384814    3493 notify.go:220] Checking for updates...
	I0615 10:22:40.388814    3493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:22:40.391881    3493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:22:40.394811    3493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:22:40.397856    3493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:22:40.400838    3493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:22:40.403988    3493 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:22:40.407748    3493 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:22:40.414830    3493 start.go:297] selected driver: qemu2
	I0615 10:22:40.414836    3493 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:22:40.414845    3493 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:22:40.416691    3493 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:22:40.419806    3493 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:22:40.422908    3493 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:22:40.422927    3493 cni.go:84] Creating CNI manager for ""
	I0615 10:22:40.422931    3493 cni.go:137] 0 nodes found, recommending kindnet
	I0615 10:22:40.422936    3493 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0615 10:22:40.422944    3493 start_flags.go:319] config:
	{Name:multinode-506000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-506000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:22:40.423032    3493 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:22:40.430815    3493 out.go:177] * Starting control plane node multinode-506000 in cluster multinode-506000
	I0615 10:22:40.434826    3493 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:22:40.434853    3493 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:22:40.434861    3493 cache.go:57] Caching tarball of preloaded images
	I0615 10:22:40.434911    3493 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:22:40.434916    3493 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:22:40.435110    3493 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/multinode-506000/config.json ...
	I0615 10:22:40.435122    3493 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/multinode-506000/config.json: {Name:mk5bc05f0de5febf75a6ff2e3b44637cd2ba2372 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:22:40.435324    3493 start.go:365] acquiring machines lock for multinode-506000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:22:40.435354    3493 start.go:369] acquired machines lock for "multinode-506000" in 24.917µs
	I0615 10:22:40.435364    3493 start.go:93] Provisioning new machine with config: &{Name:multinode-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-506000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:22:40.435401    3493 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:22:40.443626    3493 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:22:40.459660    3493 start.go:159] libmachine.API.Create for "multinode-506000" (driver="qemu2")
	I0615 10:22:40.459682    3493 client.go:168] LocalClient.Create starting
	I0615 10:22:40.459916    3493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:22:40.459939    3493 main.go:141] libmachine: Decoding PEM data...
	I0615 10:22:40.459958    3493 main.go:141] libmachine: Parsing certificate...
	I0615 10:22:40.460058    3493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:22:40.460098    3493 main.go:141] libmachine: Decoding PEM data...
	I0615 10:22:40.460107    3493 main.go:141] libmachine: Parsing certificate...
	I0615 10:22:40.460563    3493 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:22:40.612784    3493 main.go:141] libmachine: Creating SSH key...
	I0615 10:22:40.643464    3493 main.go:141] libmachine: Creating Disk image...
	I0615 10:22:40.643469    3493 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:22:40.643618    3493 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2
	I0615 10:22:40.652167    3493 main.go:141] libmachine: STDOUT: 
	I0615 10:22:40.652184    3493 main.go:141] libmachine: STDERR: 
	I0615 10:22:40.652238    3493 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2 +20000M
	I0615 10:22:40.659281    3493 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:22:40.659300    3493 main.go:141] libmachine: STDERR: 
	I0615 10:22:40.659319    3493 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2
	I0615 10:22:40.659331    3493 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:22:40.659377    3493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:3e:b2:3e:b0:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2
	I0615 10:22:40.660896    3493 main.go:141] libmachine: STDOUT: 
	I0615 10:22:40.660912    3493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:22:40.660933    3493 client.go:171] LocalClient.Create took 201.248375ms
	I0615 10:22:42.663061    3493 start.go:128] duration metric: createHost completed in 2.227678959s
	I0615 10:22:42.663135    3493 start.go:83] releasing machines lock for "multinode-506000", held for 2.227807625s
	W0615 10:22:42.663241    3493 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:22:42.675593    3493 out.go:177] * Deleting "multinode-506000" in qemu2 ...
	W0615 10:22:42.696170    3493 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:22:42.696224    3493 start.go:687] Will try again in 5 seconds ...
	I0615 10:22:47.698355    3493 start.go:365] acquiring machines lock for multinode-506000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:22:47.698888    3493 start.go:369] acquired machines lock for "multinode-506000" in 428.958µs
	I0615 10:22:47.699010    3493 start.go:93] Provisioning new machine with config: &{Name:multinode-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-506000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:22:47.699256    3493 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:22:47.707988    3493 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:22:47.755262    3493 start.go:159] libmachine.API.Create for "multinode-506000" (driver="qemu2")
	I0615 10:22:47.755301    3493 client.go:168] LocalClient.Create starting
	I0615 10:22:47.755408    3493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:22:47.755442    3493 main.go:141] libmachine: Decoding PEM data...
	I0615 10:22:47.755461    3493 main.go:141] libmachine: Parsing certificate...
	I0615 10:22:47.755530    3493 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:22:47.755557    3493 main.go:141] libmachine: Decoding PEM data...
	I0615 10:22:47.755573    3493 main.go:141] libmachine: Parsing certificate...
	I0615 10:22:47.756081    3493 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:22:47.881622    3493 main.go:141] libmachine: Creating SSH key...
	I0615 10:22:47.972782    3493 main.go:141] libmachine: Creating Disk image...
	I0615 10:22:47.972787    3493 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:22:47.972945    3493 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2
	I0615 10:22:47.981744    3493 main.go:141] libmachine: STDOUT: 
	I0615 10:22:47.981759    3493 main.go:141] libmachine: STDERR: 
	I0615 10:22:47.981819    3493 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2 +20000M
	I0615 10:22:47.988903    3493 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:22:47.988916    3493 main.go:141] libmachine: STDERR: 
	I0615 10:22:47.988925    3493 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2
	I0615 10:22:47.988930    3493 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:22:47.988973    3493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:e6:dd:b8:32:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2
	I0615 10:22:47.990479    3493 main.go:141] libmachine: STDOUT: 
	I0615 10:22:47.990492    3493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:22:47.990503    3493 client.go:171] LocalClient.Create took 235.2025ms
	I0615 10:22:49.992638    3493 start.go:128] duration metric: createHost completed in 2.293370958s
	I0615 10:22:49.992767    3493 start.go:83] releasing machines lock for "multinode-506000", held for 2.293833292s
	W0615 10:22:49.993241    3493 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-506000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:22:50.001744    3493 out.go:177] 
	W0615 10:22:50.006817    3493 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:22:50.006904    3493 out.go:239] * 
	W0615 10:22:50.009311    3493 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:22:50.017672    3493 out.go:177] 
** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-506000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (66.255333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.77s)
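[Editor's note] Every TestMultiNode failure below cascades from the error above: QEMU is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing was listening on /var/run/socket_vmnet ("Connection refused"), so no VM and therefore no cluster ever came up. A minimal Go sketch of the same reachability check, using only the paths from the log (a diagnostic illustration, not minikube's own code):

	// probe.go - dial the unix socket that socket_vmnet_client needs.
	// A "connection refused" here reproduces the failure in the log.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the dial fails, bringing up the socket_vmnet service before re-running the suite is the likely fix; how it is launched depends on the host setup, so no specific command is suggested here.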
TestMultiNode/serial/DeployApp2Nodes (113.74s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (111.924792ms)
** stderr ** 
	error: cluster "multinode-506000" does not exist
** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- rollout status deployment/busybox: exit status 1 (54.759791ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.215416ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.015208ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.909333ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.4405ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.451125ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.046791ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.945042ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.438458ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.38525ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0615 10:24:01.136480    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.446208ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0615 10:24:39.307444    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
E0615 10:24:39.313514    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
E0615 10:24:39.325057    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
E0615 10:24:39.347129    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
E0615 10:24:39.389234    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
E0615 10:24:39.471319    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
E0615 10:24:39.633542    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
E0615 10:24:39.955851    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
E0615 10:24:40.598205    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
E0615 10:24:41.880544    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.317375ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.614042ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.934958ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.66425ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.108375ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (29.049834ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (113.74s)
TestMultiNode/serial/PingHostFrom2Pods (0.08s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-506000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.080333ms)
** stderr ** 
	error: no server found for cluster "multinode-506000"
** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (29.238334ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)
TestMultiNode/serial/AddNode (0.07s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-506000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-506000 -v 3 --alsologtostderr: exit status 89 (39.4605ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-506000"
-- /stdout --
** stderr ** 
	I0615 10:24:43.947700    3580 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:24:43.947921    3580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:43.947924    3580 out.go:309] Setting ErrFile to fd 2...
	I0615 10:24:43.947926    3580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:43.947993    3580 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:24:43.948212    3580 mustload.go:65] Loading cluster: multinode-506000
	I0615 10:24:43.948372    3580 config.go:182] Loaded profile config "multinode-506000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:24:43.953034    3580 out.go:177] * The control plane node must be running for this command
	I0615 10:24:43.956125    3580 out.go:177]   To start a cluster, run: "minikube start -p multinode-506000"
** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-506000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (28.737667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
TestMultiNode/serial/ProfileList (0.1s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-506000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-506000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-506000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.3\",\"ClusterName\":\"multinode-506000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (29.232125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
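[Editor's note] The assertion above decodes the 'profile list' JSON and counts the Config.Nodes entries; the dump shows a single node because FreshStart2Nodes never provisioned either machine. A minimal sketch of that count with stand-in types (field names mirror the JSON above; these are not minikube's actual structs):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList models only the fields the check needs.
	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []json.RawMessage // one entry per node
			}
		} `json:"valid"`
	}

	func main() {
		out := []byte(`{"invalid":[],"valid":[{"Name":"multinode-506000","Config":{"Nodes":[{}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes)) // the test wanted 3, the log shows 1
		}
	}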
TestMultiNode/serial/CopyFile (0.06s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-506000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-506000 status --output json --alsologtostderr: exit status 7 (29.899458ms)
-- stdout --
	{"Name":"multinode-506000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}
-- /stdout --
** stderr ** 
	I0615 10:24:44.117543    3590 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:24:44.117671    3590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:44.117678    3590 out.go:309] Setting ErrFile to fd 2...
	I0615 10:24:44.117681    3590 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:44.117750    3590 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:24:44.117865    3590 out.go:303] Setting JSON to true
	I0615 10:24:44.117873    3590 mustload.go:65] Loading cluster: multinode-506000
	I0615 10:24:44.118230    3590 notify.go:220] Checking for updates...
	I0615 10:24:44.118459    3590 config.go:182] Loaded profile config "multinode-506000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:24:44.118471    3590 status.go:255] checking status of multinode-506000 ...
	I0615 10:24:44.118998    3590 status.go:330] multinode-506000 host status = "Stopped" (err=<nil>)
	I0615 10:24:44.119003    3590 status.go:343] host is not running, skipping remaining checks
	I0615 10:24:44.119005    3590 status.go:257] multinode-506000 status: &{Name:multinode-506000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-506000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (28.976042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
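[Editor's note] The CopyFile failure above is a decode-shape mismatch rather than a copy error: with the host stopped, 'minikube status --output json' printed a single JSON object, while the test unmarshals the output into a slice ([]cmd.Status). A minimal reproduction with a stand-in Status type (not minikube's cmd.Status):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name string
		Host string
	}

	func main() {
		// Single-object output, as captured in the stdout block above.
		out := []byte(`{"Name":"multinode-506000","Host":"Stopped"}`)
		var statuses []Status
		err := json.Unmarshal(out, &statuses)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
	}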
TestMultiNode/serial/StopNode (0.13s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-506000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-506000 node stop m03: exit status 85 (45.789833ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-506000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-506000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-506000 status: exit status 7 (29.010458ms)
-- stdout --
	multinode-506000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-506000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-506000 status --alsologtostderr: exit status 7 (28.857459ms)
-- stdout --
	multinode-506000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0615 10:24:44.251817    3598 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:24:44.251949    3598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:44.251952    3598 out.go:309] Setting ErrFile to fd 2...
	I0615 10:24:44.251954    3598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:44.252028    3598 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:24:44.252149    3598 out.go:303] Setting JSON to false
	I0615 10:24:44.252163    3598 mustload.go:65] Loading cluster: multinode-506000
	I0615 10:24:44.252206    3598 notify.go:220] Checking for updates...
	I0615 10:24:44.252337    3598 config.go:182] Loaded profile config "multinode-506000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:24:44.252343    3598 status.go:255] checking status of multinode-506000 ...
	I0615 10:24:44.252547    3598 status.go:330] multinode-506000 host status = "Stopped" (err=<nil>)
	I0615 10:24:44.252551    3598 status.go:343] host is not running, skipping remaining checks
	I0615 10:24:44.252555    3598 status.go:257] multinode-506000 status: &{Name:multinode-506000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-506000 status --alsologtostderr": multinode-506000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (28.331417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
TestMultiNode/serial/StartAfterStop (0.1s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-506000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-506000 node start m03 --alsologtostderr: exit status 85 (44.315ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0615 10:24:44.309039    3602 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:24:44.309212    3602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:44.309215    3602 out.go:309] Setting ErrFile to fd 2...
	I0615 10:24:44.309217    3602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:44.309286    3602 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:24:44.309504    3602 mustload.go:65] Loading cluster: multinode-506000
	I0615 10:24:44.309672    3602 config.go:182] Loaded profile config "multinode-506000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:24:44.313623    3602 out.go:177] 
	W0615 10:24:44.316545    3602 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0615 10:24:44.316549    3602 out.go:239] * 
	W0615 10:24:44.318047    3602 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:24:44.321531    3602 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:256: I0615 10:24:44.309039    3602 out.go:296] Setting OutFile to fd 1 ...
I0615 10:24:44.309212    3602 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:24:44.309215    3602 out.go:309] Setting ErrFile to fd 2...
I0615 10:24:44.309217    3602 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:24:44.309286    3602 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
I0615 10:24:44.309504    3602 mustload.go:65] Loading cluster: multinode-506000
I0615 10:24:44.309672    3602 config.go:182] Loaded profile config "multinode-506000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0615 10:24:44.313623    3602 out.go:177] 
W0615 10:24:44.316545    3602 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0615 10:24:44.316549    3602 out.go:239] * 
* 
W0615 10:24:44.318047    3602 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0615 10:24:44.321531    3602 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-506000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-506000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-506000 status: exit status 7 (28.699834ms)

                                                
                                                
-- stdout --
	multinode-506000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-506000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (28.573209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)
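The GUEST_NODE_RETRIEVE exit above is a knock-on failure: node m03 was never created, because the earlier AddNode step had already failed against the stopped host. A minimal sketch, using only commands that appear in this log, of confirming which nodes a profile actually has before starting one:

    # List the nodes minikube knows about for this profile
    out/minikube-darwin-arm64 node list -p multinode-506000

    # Start a node only if it appears in the list above
    out/minikube-darwin-arm64 -p multinode-506000 node start m03 --alsologtostderr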

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (5.36s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-506000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-506000
E0615 10:24:44.442736    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-506000 --wait=true -v=8 --alsologtostderr
E0615 10:24:49.565176    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-506000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.172383208s)

                                                
                                                
-- stdout --
	* [multinode-506000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-506000 in cluster multinode-506000
	* Restarting existing qemu2 VM for "multinode-506000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-506000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:24:44.498022    3612 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:24:44.498142    3612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:44.498145    3612 out.go:309] Setting ErrFile to fd 2...
	I0615 10:24:44.498147    3612 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:44.498215    3612 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:24:44.499188    3612 out.go:303] Setting JSON to false
	I0615 10:24:44.514224    3612 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3255,"bootTime":1686846629,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:24:44.514295    3612 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:24:44.519508    3612 out.go:177] * [multinode-506000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:24:44.526530    3612 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:24:44.530532    3612 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:24:44.526587    3612 notify.go:220] Checking for updates...
	I0615 10:24:44.534586    3612 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:24:44.537562    3612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:24:44.541485    3612 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:24:44.542939    3612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:24:44.546822    3612 config.go:182] Loaded profile config "multinode-506000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:24:44.546868    3612 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:24:44.551494    3612 out.go:177] * Using the qemu2 driver based on existing profile
	I0615 10:24:44.556508    3612 start.go:297] selected driver: qemu2
	I0615 10:24:44.556512    3612 start.go:884] validating driver "qemu2" against &{Name:multinode-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-506000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:24:44.556570    3612 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:24:44.558441    3612 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:24:44.558463    3612 cni.go:84] Creating CNI manager for ""
	I0615 10:24:44.558466    3612 cni.go:137] 1 nodes found, recommending kindnet
	I0615 10:24:44.558472    3612 start_flags.go:319] config:
	{Name:multinode-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-506000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:24:44.558564    3612 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:24:44.565536    3612 out.go:177] * Starting control plane node multinode-506000 in cluster multinode-506000
	I0615 10:24:44.569520    3612 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:24:44.569544    3612 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:24:44.569556    3612 cache.go:57] Caching tarball of preloaded images
	I0615 10:24:44.569611    3612 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:24:44.569616    3612 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:24:44.569682    3612 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/multinode-506000/config.json ...
	I0615 10:24:44.570048    3612 start.go:365] acquiring machines lock for multinode-506000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:24:44.570078    3612 start.go:369] acquired machines lock for "multinode-506000" in 24.334µs
	I0615 10:24:44.570087    3612 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:24:44.570092    3612 fix.go:54] fixHost starting: 
	I0615 10:24:44.570205    3612 fix.go:102] recreateIfNeeded on multinode-506000: state=Stopped err=<nil>
	W0615 10:24:44.570213    3612 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:24:44.578526    3612 out.go:177] * Restarting existing qemu2 VM for "multinode-506000" ...
	I0615 10:24:44.582542    3612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:e6:dd:b8:32:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2
	I0615 10:24:44.584371    3612 main.go:141] libmachine: STDOUT: 
	I0615 10:24:44.584389    3612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:24:44.584420    3612 fix.go:56] fixHost completed within 14.328917ms
	I0615 10:24:44.584425    3612 start.go:83] releasing machines lock for "multinode-506000", held for 14.343416ms
	W0615 10:24:44.584432    3612 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:24:44.584467    3612 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:24:44.584472    3612 start.go:687] Will try again in 5 seconds ...
	I0615 10:24:49.586638    3612 start.go:365] acquiring machines lock for multinode-506000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:24:49.587076    3612 start.go:369] acquired machines lock for "multinode-506000" in 344.292µs
	I0615 10:24:49.587191    3612 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:24:49.587210    3612 fix.go:54] fixHost starting: 
	I0615 10:24:49.587953    3612 fix.go:102] recreateIfNeeded on multinode-506000: state=Stopped err=<nil>
	W0615 10:24:49.587981    3612 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:24:49.592679    3612 out.go:177] * Restarting existing qemu2 VM for "multinode-506000" ...
	I0615 10:24:49.600816    3612 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:e6:dd:b8:32:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2
	I0615 10:24:49.610227    3612 main.go:141] libmachine: STDOUT: 
	I0615 10:24:49.610275    3612 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:24:49.610364    3612 fix.go:56] fixHost completed within 23.156125ms
	I0615 10:24:49.610381    3612 start.go:83] releasing machines lock for "multinode-506000", held for 23.281125ms
	W0615 10:24:49.610596    3612 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-506000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-506000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:24:49.617605    3612 out.go:177] 
	W0615 10:24:49.620767    3612 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:24:49.620817    3612 out.go:239] * 
	* 
	W0615 10:24:49.623527    3612 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:24:49.631717    3612 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-506000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-506000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (32.299084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.36s)
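Every restart in this block dies on the same root cause: libmachine launches qemu through socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet. A hedged sketch of checking the daemon on the host; the paths come from the log above, while the exact service management (launchd vs. a manual start) depends on how socket_vmnet was installed:

    # The unix socket the client dials; "Connection refused" usually means
    # the daemon is not running or is listening on a different path
    ls -l /var/run/socket_vmnet

    # Is any socket_vmnet process alive at all?
    pgrep -fl socket_vmnet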

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-506000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-506000 node delete m03: exit status 89 (37.956625ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-506000"

                                                
                                                
-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-506000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-506000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-506000 status --alsologtostderr: exit status 7 (28.562167ms)

                                                
                                                
-- stdout --
	multinode-506000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:24:49.810292    3625 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:24:49.810414    3625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:49.810418    3625 out.go:309] Setting ErrFile to fd 2...
	I0615 10:24:49.810420    3625 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:49.810484    3625 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:24:49.810602    3625 out.go:303] Setting JSON to false
	I0615 10:24:49.810611    3625 mustload.go:65] Loading cluster: multinode-506000
	I0615 10:24:49.810667    3625 notify.go:220] Checking for updates...
	I0615 10:24:49.810794    3625 config.go:182] Loaded profile config "multinode-506000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:24:49.810802    3625 status.go:255] checking status of multinode-506000 ...
	I0615 10:24:49.810990    3625 status.go:330] multinode-506000 host status = "Stopped" (err=<nil>)
	I0615 10:24:49.810994    3625 status.go:343] host is not running, skipping remaining checks
	I0615 10:24:49.810996    3625 status.go:257] multinode-506000 status: &{Name:multinode-506000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-506000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (28.379ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
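Exit status 89 is minikube's guard against running node-level commands while the control plane is stopped, and its message names the fix directly. Sketched with the binary under test, assuming the underlying socket_vmnet problem has been resolved first:

    # Bring the control plane back, then retry the node operation
    out/minikube-darwin-arm64 start -p multinode-506000
    out/minikube-darwin-arm64 -p multinode-506000 node delete m03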

                                                
                                    
TestMultiNode/serial/StopMultiNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-506000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-506000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-506000 status: exit status 7 (29.643917ms)

                                                
                                                
-- stdout --
	multinode-506000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-506000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-506000 status --alsologtostderr: exit status 7 (28.653917ms)

                                                
                                                
-- stdout --
	multinode-506000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:24:49.953362    3633 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:24:49.953483    3633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:49.953485    3633 out.go:309] Setting ErrFile to fd 2...
	I0615 10:24:49.953488    3633 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:49.953556    3633 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:24:49.953666    3633 out.go:303] Setting JSON to false
	I0615 10:24:49.953675    3633 mustload.go:65] Loading cluster: multinode-506000
	I0615 10:24:49.953742    3633 notify.go:220] Checking for updates...
	I0615 10:24:49.953854    3633 config.go:182] Loaded profile config "multinode-506000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:24:49.953859    3633 status.go:255] checking status of multinode-506000 ...
	I0615 10:24:49.954041    3633 status.go:330] multinode-506000 host status = "Stopped" (err=<nil>)
	I0615 10:24:49.954044    3633 status.go:343] host is not running, skipping remaining checks
	I0615 10:24:49.954046    3633 status.go:257] multinode-506000 status: &{Name:multinode-506000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-506000 status --alsologtostderr": multinode-506000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-506000 status --alsologtostderr": multinode-506000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (28.522542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.14s)
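The assertions at multinode_test.go:333 and :337 count one "host: Stopped" and one "kubelet: Stopped" line per expected node; only the control plane reports here because the second node was never added. A quick, hedged way to reproduce the count by hand from the same status command:

    # The test expects one match per node in the profile (two for this test)
    out/minikube-darwin-arm64 -p multinode-506000 status --alsologtostderr | grep -c "host: Stopped"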

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-506000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-506000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.172295791s)

                                                
                                                
-- stdout --
	* [multinode-506000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-506000 in cluster multinode-506000
	* Restarting existing qemu2 VM for "multinode-506000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-506000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:24:50.010500    3637 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:24:50.010604    3637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:50.010607    3637 out.go:309] Setting ErrFile to fd 2...
	I0615 10:24:50.010610    3637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:24:50.010679    3637 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:24:50.011612    3637 out.go:303] Setting JSON to false
	I0615 10:24:50.026859    3637 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3261,"bootTime":1686846629,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:24:50.026922    3637 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:24:50.031520    3637 out.go:177] * [multinode-506000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:24:50.034514    3637 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:24:50.038470    3637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:24:50.034583    3637 notify.go:220] Checking for updates...
	I0615 10:24:50.045494    3637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:24:50.048433    3637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:24:50.052494    3637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:24:50.055452    3637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:24:50.058711    3637 config.go:182] Loaded profile config "multinode-506000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:24:50.058965    3637 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:24:50.063462    3637 out.go:177] * Using the qemu2 driver based on existing profile
	I0615 10:24:50.070418    3637 start.go:297] selected driver: qemu2
	I0615 10:24:50.070423    3637 start.go:884] validating driver "qemu2" against &{Name:multinode-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-506000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:24:50.070487    3637 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:24:50.072272    3637 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:24:50.072297    3637 cni.go:84] Creating CNI manager for ""
	I0615 10:24:50.072300    3637 cni.go:137] 1 nodes found, recommending kindnet
	I0615 10:24:50.072306    3637 start_flags.go:319] config:
	{Name:multinode-506000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:multinode-506000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:24:50.072417    3637 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:24:50.076540    3637 out.go:177] * Starting control plane node multinode-506000 in cluster multinode-506000
	I0615 10:24:50.083427    3637 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:24:50.083448    3637 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:24:50.083462    3637 cache.go:57] Caching tarball of preloaded images
	I0615 10:24:50.083513    3637 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:24:50.083518    3637 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:24:50.083602    3637 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/multinode-506000/config.json ...
	I0615 10:24:50.083961    3637 start.go:365] acquiring machines lock for multinode-506000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:24:50.083990    3637 start.go:369] acquired machines lock for "multinode-506000" in 23.791µs
	I0615 10:24:50.084000    3637 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:24:50.084005    3637 fix.go:54] fixHost starting: 
	I0615 10:24:50.084110    3637 fix.go:102] recreateIfNeeded on multinode-506000: state=Stopped err=<nil>
	W0615 10:24:50.084118    3637 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:24:50.088309    3637 out.go:177] * Restarting existing qemu2 VM for "multinode-506000" ...
	I0615 10:24:50.092478    3637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:e6:dd:b8:32:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2
	I0615 10:24:50.094245    3637 main.go:141] libmachine: STDOUT: 
	I0615 10:24:50.094260    3637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:24:50.094287    3637 fix.go:56] fixHost completed within 10.282667ms
	I0615 10:24:50.094292    3637 start.go:83] releasing machines lock for "multinode-506000", held for 10.297583ms
	W0615 10:24:50.094298    3637 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:24:50.094331    3637 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:24:50.094335    3637 start.go:687] Will try again in 5 seconds ...
	I0615 10:24:55.096345    3637 start.go:365] acquiring machines lock for multinode-506000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:24:55.096828    3637 start.go:369] acquired machines lock for "multinode-506000" in 411.917µs
	I0615 10:24:55.096983    3637 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:24:55.097003    3637 fix.go:54] fixHost starting: 
	I0615 10:24:55.097678    3637 fix.go:102] recreateIfNeeded on multinode-506000: state=Stopped err=<nil>
	W0615 10:24:55.097704    3637 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:24:55.106040    3637 out.go:177] * Restarting existing qemu2 VM for "multinode-506000" ...
	I0615 10:24:55.110206    3637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:e6:dd:b8:32:ec -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/multinode-506000/disk.qcow2
	I0615 10:24:55.119482    3637 main.go:141] libmachine: STDOUT: 
	I0615 10:24:55.119536    3637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:24:55.119624    3637 fix.go:56] fixHost completed within 22.622166ms
	I0615 10:24:55.119640    3637 start.go:83] releasing machines lock for "multinode-506000", held for 22.791542ms
	W0615 10:24:55.119835    3637 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-506000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-506000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:24:55.127096    3637 out.go:177] 
	W0615 10:24:55.131104    3637 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:24:55.131141    3637 out.go:239] * 
	* 
	W0615 10:24:55.134629    3637 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:24:55.143050    3637 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-506000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (68.251959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.24s)
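The qemu invocation in the log shows the wiring: socket_vmnet_client connects to /var/run/socket_vmnet and hands the connection to qemu as fd 3 (-netdev socket,id=net0,fd=3). A hedged way to exercise that client in isolation, assuming it accepts an arbitrary command after the socket path, as the invocation above suggests:

    # Should run the wrapped command when the daemon is healthy; with the
    # daemon down it should reproduce the same "Connection refused" error
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true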

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-506000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-506000-m01 --driver=qemu2 
E0615 10:24:59.807661    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
E0615 10:25:00.476369    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-506000-m01 --driver=qemu2 : exit status 80 (9.886288042s)

                                                
                                                
-- stdout --
	* [multinode-506000-m01] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-506000-m01 in cluster multinode-506000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-506000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-506000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-506000-m02 --driver=qemu2 
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-506000-m02 --driver=qemu2 : exit status 80 (9.894285792s)

                                                
                                                
-- stdout --
	* [multinode-506000-m02] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-506000-m02 in cluster multinode-506000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-506000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-506000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-506000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-506000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-506000: exit status 89 (74.516291ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-506000"

                                                
                                                
-- /stdout --
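Exit status 89 here is the "control plane node must be running" condition, and the command's own output gives the recovery order. A sketch of the sequence the hint points to, using the profile name from this log:

	# start the control plane first, then add the worker node
	out/minikube-darwin-arm64 start -p multinode-506000
	out/minikube-darwin-arm64 node add -p multinode-506000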
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-506000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000: exit status 7 (29.195042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-506000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.01s)
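In the post-mortem above, `minikube status` signals state through its exit code: the harness saw exit 7 paired with a host state of "Stopped" and treats that combination as possibly fine ("may be ok"). The pairing can be observed directly with the same command the harness ran:

	out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-506000 -n multinode-506000
	echo "exit: $?"   # 7 in this run, matching the Stopped host state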

                                                
                                    
TestPreload (10.03s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-777000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E0615 10:25:20.288748    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-777000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.858950459s)

                                                
                                                
-- stdout --
	* [test-preload-777000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-777000 in cluster test-preload-777000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-777000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:25:15.386310    3690 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:25:15.386444    3690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:25:15.386446    3690 out.go:309] Setting ErrFile to fd 2...
	I0615 10:25:15.386449    3690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:25:15.386517    3690 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:25:15.387550    3690 out.go:303] Setting JSON to false
	I0615 10:25:15.402869    3690 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3286,"bootTime":1686846629,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:25:15.402949    3690 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:25:15.407928    3690 out.go:177] * [test-preload-777000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:25:15.415845    3690 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:25:15.415911    3690 notify.go:220] Checking for updates...
	I0615 10:25:15.422809    3690 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:25:15.425822    3690 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:25:15.428830    3690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:25:15.431773    3690 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:25:15.434852    3690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:25:15.438269    3690 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:25:15.438311    3690 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:25:15.442761    3690 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:25:15.449871    3690 start.go:297] selected driver: qemu2
	I0615 10:25:15.449876    3690 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:25:15.449895    3690 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:25:15.451845    3690 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:25:15.454743    3690 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:25:15.457864    3690 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:25:15.457881    3690 cni.go:84] Creating CNI manager for ""
	I0615 10:25:15.457886    3690 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:25:15.457889    3690 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:25:15.457899    3690 start_flags.go:319] config:
	{Name:test-preload-777000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-777000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:25:15.457976    3690 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:25:15.459805    3690 out.go:177] * Starting control plane node test-preload-777000 in cluster test-preload-777000
	I0615 10:25:15.467793    3690 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0615 10:25:15.467860    3690 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/test-preload-777000/config.json ...
	I0615 10:25:15.467875    3690 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/test-preload-777000/config.json: {Name:mka723931553137acb3302955b5939c478aabc5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:25:15.467907    3690 cache.go:107] acquiring lock: {Name:mk9739938845b55b56d95ba5c485643bb258d975 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:25:15.467904    3690 cache.go:107] acquiring lock: {Name:mkb251ff5edae426ab2aa5dafd3340c322e8c0bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:25:15.467928    3690 cache.go:107] acquiring lock: {Name:mk4832f85c8f9507277e8169f72f9e359bbeebd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:25:15.467937    3690 cache.go:107] acquiring lock: {Name:mk799235034ba04922e41b2ea2ecea162a3b19a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:25:15.467975    3690 cache.go:107] acquiring lock: {Name:mkfd79dadb42eeba74389e90d7e62a0f1c4004a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:25:15.468077    3690 cache.go:107] acquiring lock: {Name:mkeb96064eaffba3f88312c80cd916c1dbaf41c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:25:15.468142    3690 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0615 10:25:15.468161    3690 start.go:365] acquiring machines lock for test-preload-777000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:25:15.468174    3690 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0615 10:25:15.468198    3690 start.go:369] acquired machines lock for "test-preload-777000" in 32.125µs
	I0615 10:25:15.468217    3690 start.go:93] Provisioning new machine with config: &{Name:test-preload-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-777000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:25:15.468248    3690 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:25:15.468261    3690 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0615 10:25:15.468267    3690 cache.go:107] acquiring lock: {Name:mk3b87143609c9025daf34ab811c623507da1594 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:25:15.472829    3690 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:25:15.468318    3690 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0615 10:25:15.468323    3690 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0615 10:25:15.468342    3690 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0615 10:25:15.468390    3690 cache.go:107] acquiring lock: {Name:mkb619c018f3e43403650f1db1964313cc18c5bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:25:15.473454    3690 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0615 10:25:15.473538    3690 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0615 10:25:15.481257    3690 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0615 10:25:15.481328    3690 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0615 10:25:15.481350    3690 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0615 10:25:15.484301    3690 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0615 10:25:15.484338    3690 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0615 10:25:15.484383    3690 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0615 10:25:15.484383    3690 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0615 10:25:15.484407    3690 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0615 10:25:15.490675    3690 start.go:159] libmachine.API.Create for "test-preload-777000" (driver="qemu2")
	I0615 10:25:15.490693    3690 client.go:168] LocalClient.Create starting
	I0615 10:25:15.490783    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:25:15.490810    3690 main.go:141] libmachine: Decoding PEM data...
	I0615 10:25:15.490818    3690 main.go:141] libmachine: Parsing certificate...
	I0615 10:25:15.490864    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:25:15.490878    3690 main.go:141] libmachine: Decoding PEM data...
	I0615 10:25:15.490887    3690 main.go:141] libmachine: Parsing certificate...
	I0615 10:25:15.491198    3690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:25:15.699534    3690 main.go:141] libmachine: Creating SSH key...
	I0615 10:25:15.754441    3690 main.go:141] libmachine: Creating Disk image...
	I0615 10:25:15.754450    3690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:25:15.754616    3690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2
	I0615 10:25:15.763360    3690 main.go:141] libmachine: STDOUT: 
	I0615 10:25:15.763375    3690 main.go:141] libmachine: STDERR: 
	I0615 10:25:15.763432    3690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2 +20000M
	I0615 10:25:15.771406    3690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:25:15.771423    3690 main.go:141] libmachine: STDERR: 
	I0615 10:25:15.771447    3690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2
	I0615 10:25:15.771451    3690 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:25:15.771502    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:61:cb:ae:66:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2
	I0615 10:25:15.773122    3690 main.go:141] libmachine: STDOUT: 
	I0615 10:25:15.773137    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:25:15.773155    3690 client.go:171] LocalClient.Create took 282.461375ms
	I0615 10:25:16.654060    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0615 10:25:16.858616    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0615 10:25:17.070814    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0615 10:25:17.109245    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0615 10:25:17.115439    3690 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0615 10:25:17.115456    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0615 10:25:17.252529    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0615 10:25:17.389875    3690 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0615 10:25:17.389892    3690 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.921947167s
	I0615 10:25:17.389905    3690 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0615 10:25:17.420482    3690 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0615 10:25:17.420517    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0615 10:25:17.620307    3690 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0615 10:25:17.773534    3690 start.go:128] duration metric: createHost completed in 2.305270417s
	I0615 10:25:17.773580    3690 start.go:83] releasing machines lock for "test-preload-777000", held for 2.305409584s
	W0615 10:25:17.773637    3690 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:25:17.782762    3690 out.go:177] * Deleting "test-preload-777000" in qemu2 ...
	W0615 10:25:17.806951    3690 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:25:17.807002    3690 start.go:687] Will try again in 5 seconds ...
	I0615 10:25:17.930481    3690 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0615 10:25:17.930534    3690 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.462669458s
	I0615 10:25:17.930561    3690 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0615 10:25:19.114426    3690 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0615 10:25:19.114476    3690 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.646263833s
	I0615 10:25:19.114544    3690 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0615 10:25:20.152264    3690 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0615 10:25:20.152309    3690 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.684489833s
	I0615 10:25:20.152368    3690 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0615 10:25:20.617559    3690 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0615 10:25:20.617607    3690 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 5.149617959s
	I0615 10:25:20.617634    3690 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0615 10:25:20.743571    3690 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0615 10:25:20.743625    3690 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 5.275797292s
	I0615 10:25:20.743664    3690 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0615 10:25:21.648230    3690 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0615 10:25:21.648273    3690 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.180430917s
	I0615 10:25:21.648297    3690 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0615 10:25:22.807264    3690 start.go:365] acquiring machines lock for test-preload-777000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:25:22.807797    3690 start.go:369] acquired machines lock for "test-preload-777000" in 461.167µs
	I0615 10:25:22.807915    3690 start.go:93] Provisioning new machine with config: &{Name:test-preload-777000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-777000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:25:22.808226    3690 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:25:22.815755    3690 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:25:22.864051    3690 start.go:159] libmachine.API.Create for "test-preload-777000" (driver="qemu2")
	I0615 10:25:22.864088    3690 client.go:168] LocalClient.Create starting
	I0615 10:25:22.864215    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:25:22.864253    3690 main.go:141] libmachine: Decoding PEM data...
	I0615 10:25:22.864275    3690 main.go:141] libmachine: Parsing certificate...
	I0615 10:25:22.864363    3690 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:25:22.864394    3690 main.go:141] libmachine: Decoding PEM data...
	I0615 10:25:22.864414    3690 main.go:141] libmachine: Parsing certificate...
	I0615 10:25:22.864946    3690 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:25:23.054047    3690 main.go:141] libmachine: Creating SSH key...
	I0615 10:25:23.159970    3690 main.go:141] libmachine: Creating Disk image...
	I0615 10:25:23.159976    3690 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:25:23.160124    3690 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2
	I0615 10:25:23.168758    3690 main.go:141] libmachine: STDOUT: 
	I0615 10:25:23.168783    3690 main.go:141] libmachine: STDERR: 
	I0615 10:25:23.168849    3690 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2 +20000M
	I0615 10:25:23.176261    3690 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:25:23.176287    3690 main.go:141] libmachine: STDERR: 
	I0615 10:25:23.176300    3690 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2
	I0615 10:25:23.176306    3690 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:25:23.176350    3690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:17:46:c5:e4:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/test-preload-777000/disk.qcow2
	I0615 10:25:23.178031    3690 main.go:141] libmachine: STDOUT: 
	I0615 10:25:23.178051    3690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:25:23.178069    3690 client.go:171] LocalClient.Create took 313.981792ms
	I0615 10:25:24.452535    3690 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0615 10:25:24.452625    3690 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.984380375s
	I0615 10:25:24.452662    3690 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0615 10:25:24.452702    3690 cache.go:87] Successfully saved all images to host disk.
	I0615 10:25:25.180225    3690 start.go:128] duration metric: createHost completed in 2.372009667s
	I0615 10:25:25.180294    3690 start.go:83] releasing machines lock for "test-preload-777000", held for 2.372511416s
	W0615 10:25:25.180600    3690 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-777000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:25:25.189209    3690 out.go:177] 
	W0615 10:25:25.193242    3690 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:25:25.193286    3690 out.go:239] * 
	* 
	W0615 10:25:25.195747    3690 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:25:25.204171    3690 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-777000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-06-15 10:25:25.220548 -0700 PDT m=+3179.556933418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-777000 -n test-preload-777000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-777000 -n test-preload-777000: exit status 7 (69.736917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-777000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-777000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-777000
--- FAIL: TestPreload (10.03s)
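The libmachine trace above shows the disk preparation succeeding before the VM launch fails. A sketch of those steps by hand, with a placeholder working directory in place of the job's machine path and the QEMU flags abbreviated from the full command in the trace:

	# build the guest disk exactly as the trace does: raw -> qcow2, then grow it
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M
	# the launch is where this run dies: socket_vmnet_client dials
	# /var/run/socket_vmnet, hands the connected socket to QEMU as fd 3
	# (-netdev socket,id=net0,fd=3), and gets Connection refused
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -m 2200 -smp 2 -device virtio-net-pci,netdev=net0 \
	  -netdev socket,id=net0,fd=3 -daemonize disk.qcow2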

                                                
                                    
TestScheduledStopUnix (9.84s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-157000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-157000 --memory=2048 --driver=qemu2 : exit status 80 (9.670566959s)

                                                
                                                
-- stdout --
	* [scheduled-stop-157000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-157000 in cluster scheduled-stop-157000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-157000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-157000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-157000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-157000 in cluster scheduled-stop-157000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-157000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-157000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-06-15 10:25:35.056337 -0700 PDT m=+3189.392880834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-157000 -n scheduled-stop-157000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-157000 -n scheduled-stop-157000: exit status 7 (67.381459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-157000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-157000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-157000
--- FAIL: TestScheduledStopUnix (9.84s)
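For context, the run never reaches the behavior this test is meant to cover: scheduling a delayed stop. A hedged sketch of the flags the feature exposes (the five-minute duration is illustrative, not taken from this run):

	# schedule a stop five minutes out, then cancel it
	out/minikube-darwin-arm64 stop -p scheduled-stop-157000 --schedule 5m
	out/minikube-darwin-arm64 stop -p scheduled-stop-157000 --cancel-scheduled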

                                                
                                    
TestSkaffold (16.96s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe840658980 version
skaffold_test.go:63: skaffold version: v2.5.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-637000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-637000 --memory=2600 --driver=qemu2 : exit status 80 (9.851588542s)

                                                
                                                
-- stdout --
	* [skaffold-637000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-637000 in cluster skaffold-637000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-637000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-637000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-637000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-637000 in cluster skaffold-637000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-637000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-637000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-06-15 10:25:52.021831 -0700 PDT m=+3206.358649751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-637000 -n skaffold-637000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-637000 -n skaffold-637000: exit status 7 (61.720167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-637000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-637000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-637000
--- FAIL: TestSkaffold (16.96s)

                                                
                                    
TestRunningBinaryUpgrade (158.49s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
E0615 10:26:44.976357    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
E0615 10:27:23.171755    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-06-15 10:29:11.534759 -0700 PDT m=+3405.874808459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-841000 -n running-upgrade-841000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-841000 -n running-upgrade-841000: exit status 85 (89.898584ms)

                                                
                                                
-- stdout --
	* Profile "running-upgrade-841000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-841000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-841000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-841000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-841000\"")
helpers_test.go:175: Cleaning up "running-upgrade-841000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-841000
--- FAIL: TestRunningBinaryUpgrade (158.49s)
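Unlike the socket_vmnet failures around it, this test dies one step earlier: installing the old v1.6.2 release binary returns a 404. That is plausible on this agent because v1.6.2 predates minikube's darwin/arm64 builds, so no such artifact exists to download. A probe against the release bucket (the URL pattern is an assumption about where the installer fetches from; it is not shown in this log):

	# expect 404 for darwin-arm64, while darwin-amd64 of the same release exists
	curl -sI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64 | head -1
	curl -sI https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-amd64 | head -1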

                                                
                                    
TestKubernetesUpgrade (15.41s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.910232041s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-274000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-274000 in cluster kubernetes-upgrade-274000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-274000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:29:11.889530    4172 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:29:11.889639    4172 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:29:11.889642    4172 out.go:309] Setting ErrFile to fd 2...
	I0615 10:29:11.889645    4172 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:29:11.889717    4172 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:29:11.890760    4172 out.go:303] Setting JSON to false
	I0615 10:29:11.907017    4172 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3522,"bootTime":1686846629,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:29:11.907079    4172 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:29:11.911401    4172 out.go:177] * [kubernetes-upgrade-274000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:29:11.918524    4172 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:29:11.918567    4172 notify.go:220] Checking for updates...
	I0615 10:29:11.921443    4172 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:29:11.924455    4172 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:29:11.927466    4172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:29:11.928879    4172 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:29:11.932440    4172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:29:11.935755    4172 config.go:182] Loaded profile config "cert-expiration-744000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:29:11.935816    4172 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:29:11.935855    4172 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:29:11.940300    4172 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:29:11.947458    4172 start.go:297] selected driver: qemu2
	I0615 10:29:11.947463    4172 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:29:11.947469    4172 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:29:11.949336    4172 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:29:11.952475    4172 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:29:11.955471    4172 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0615 10:29:11.955485    4172 cni.go:84] Creating CNI manager for ""
	I0615 10:29:11.955492    4172 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0615 10:29:11.955496    4172 start_flags.go:319] config:
	{Name:kubernetes-upgrade-274000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-274000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:29:11.955572    4172 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:29:11.964450    4172 out.go:177] * Starting control plane node kubernetes-upgrade-274000 in cluster kubernetes-upgrade-274000
	I0615 10:29:11.968440    4172 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0615 10:29:11.968467    4172 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0615 10:29:11.968477    4172 cache.go:57] Caching tarball of preloaded images
	I0615 10:29:11.968535    4172 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:29:11.968540    4172 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0615 10:29:11.968598    4172 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/kubernetes-upgrade-274000/config.json ...
	I0615 10:29:11.968610    4172 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/kubernetes-upgrade-274000/config.json: {Name:mkabfcd0442d69bf6afce85beb78eed13d2c5f70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:29:11.968807    4172 start.go:365] acquiring machines lock for kubernetes-upgrade-274000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:29:11.968839    4172 start.go:369] acquired machines lock for "kubernetes-upgrade-274000" in 24.667µs
	I0615 10:29:11.968850    4172 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-274000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-274000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:29:11.968876    4172 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:29:11.973446    4172 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:29:11.990397    4172 start.go:159] libmachine.API.Create for "kubernetes-upgrade-274000" (driver="qemu2")
	I0615 10:29:11.990421    4172 client.go:168] LocalClient.Create starting
	I0615 10:29:11.990475    4172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:29:11.990498    4172 main.go:141] libmachine: Decoding PEM data...
	I0615 10:29:11.990508    4172 main.go:141] libmachine: Parsing certificate...
	I0615 10:29:11.990557    4172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:29:11.990571    4172 main.go:141] libmachine: Decoding PEM data...
	I0615 10:29:11.990579    4172 main.go:141] libmachine: Parsing certificate...
	I0615 10:29:11.990879    4172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:29:12.285308    4172 main.go:141] libmachine: Creating SSH key...
	I0615 10:29:12.322852    4172 main.go:141] libmachine: Creating Disk image...
	I0615 10:29:12.322859    4172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:29:12.322998    4172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0615 10:29:12.331434    4172 main.go:141] libmachine: STDOUT: 
	I0615 10:29:12.331471    4172 main.go:141] libmachine: STDERR: 
	I0615 10:29:12.331547    4172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2 +20000M
	I0615 10:29:12.338644    4172 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:29:12.338654    4172 main.go:141] libmachine: STDERR: 
	I0615 10:29:12.338674    4172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0615 10:29:12.338679    4172 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:29:12.338717    4172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:29:d2:1e:db:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0615 10:29:12.340209    4172 main.go:141] libmachine: STDOUT: 
	I0615 10:29:12.340219    4172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:29:12.340237    4172 client.go:171] LocalClient.Create took 349.816334ms
	I0615 10:29:14.342370    4172 start.go:128] duration metric: createHost completed in 2.37351225s
	I0615 10:29:14.342446    4172 start.go:83] releasing machines lock for "kubernetes-upgrade-274000", held for 2.373626834s
	W0615 10:29:14.342502    4172 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:29:14.350840    4172 out.go:177] * Deleting "kubernetes-upgrade-274000" in qemu2 ...
	W0615 10:29:14.369527    4172 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:29:14.369560    4172 start.go:687] Will try again in 5 seconds ...
	I0615 10:29:19.371578    4172 start.go:365] acquiring machines lock for kubernetes-upgrade-274000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:29:19.372096    4172 start.go:369] acquired machines lock for "kubernetes-upgrade-274000" in 415.666µs
	I0615 10:29:19.372234    4172 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-274000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-274000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:29:19.372570    4172 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:29:19.382266    4172 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:29:19.430377    4172 start.go:159] libmachine.API.Create for "kubernetes-upgrade-274000" (driver="qemu2")
	I0615 10:29:19.430414    4172 client.go:168] LocalClient.Create starting
	I0615 10:29:19.430560    4172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:29:19.430598    4172 main.go:141] libmachine: Decoding PEM data...
	I0615 10:29:19.430617    4172 main.go:141] libmachine: Parsing certificate...
	I0615 10:29:19.430693    4172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:29:19.430720    4172 main.go:141] libmachine: Decoding PEM data...
	I0615 10:29:19.430733    4172 main.go:141] libmachine: Parsing certificate...
	I0615 10:29:19.431299    4172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:29:19.662120    4172 main.go:141] libmachine: Creating SSH key...
	I0615 10:29:19.713441    4172 main.go:141] libmachine: Creating Disk image...
	I0615 10:29:19.713447    4172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:29:19.713595    4172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0615 10:29:19.722133    4172 main.go:141] libmachine: STDOUT: 
	I0615 10:29:19.722146    4172 main.go:141] libmachine: STDERR: 
	I0615 10:29:19.722202    4172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2 +20000M
	I0615 10:29:19.729304    4172 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:29:19.729313    4172 main.go:141] libmachine: STDERR: 
	I0615 10:29:19.729327    4172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0615 10:29:19.729332    4172 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:29:19.729375    4172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:8a:fa:b4:7d:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0615 10:29:19.730909    4172 main.go:141] libmachine: STDOUT: 
	I0615 10:29:19.730921    4172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:29:19.730932    4172 client.go:171] LocalClient.Create took 300.518666ms
	I0615 10:29:21.733051    4172 start.go:128] duration metric: createHost completed in 2.36049575s
	I0615 10:29:21.733144    4172 start.go:83] releasing machines lock for "kubernetes-upgrade-274000", held for 2.361055625s
	W0615 10:29:21.733587    4172 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-274000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-274000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:29:21.743119    4172 out.go:177] 
	W0615 10:29:21.747225    4172 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:29:21.747267    4172 out.go:239] * 
	* 
	W0615 10:29:21.749868    4172 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:29:21.759190    4172 out.go:177] 

** /stderr **
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-274000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-274000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-274000 status --format={{.Host}}: exit status 7 (36.016667ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.168113125s)

-- stdout --
	* [kubernetes-upgrade-274000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-274000 in cluster kubernetes-upgrade-274000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-274000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-274000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:29:21.936450    4189 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:29:21.936558    4189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:29:21.936561    4189 out.go:309] Setting ErrFile to fd 2...
	I0615 10:29:21.936563    4189 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:29:21.936631    4189 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:29:21.937645    4189 out.go:303] Setting JSON to false
	I0615 10:29:21.953026    4189 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3532,"bootTime":1686846629,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:29:21.953096    4189 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:29:21.956400    4189 out.go:177] * [kubernetes-upgrade-274000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:29:21.963406    4189 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:29:21.967364    4189 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:29:21.963518    4189 notify.go:220] Checking for updates...
	I0615 10:29:21.973399    4189 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:29:21.976365    4189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:29:21.979367    4189 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:29:21.982522    4189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:29:21.984208    4189 config.go:182] Loaded profile config "kubernetes-upgrade-274000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0615 10:29:21.984442    4189 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:29:21.988345    4189 out.go:177] * Using the qemu2 driver based on existing profile
	I0615 10:29:21.995236    4189 start.go:297] selected driver: qemu2
	I0615 10:29:21.995241    4189 start.go:884] validating driver "qemu2" against &{Name:kubernetes-upgrade-274000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-274000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:29:21.995302    4189 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:29:21.997090    4189 cni.go:84] Creating CNI manager for ""
	I0615 10:29:21.997105    4189 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:29:21.997111    4189 start_flags.go:319] config:
	{Name:kubernetes-upgrade-274000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubernetes-upgrade-274000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:29:21.997189    4189 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:29:22.004397    4189 out.go:177] * Starting control plane node kubernetes-upgrade-274000 in cluster kubernetes-upgrade-274000
	I0615 10:29:22.008328    4189 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:29:22.008357    4189 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:29:22.008372    4189 cache.go:57] Caching tarball of preloaded images
	I0615 10:29:22.008431    4189 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:29:22.008443    4189 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:29:22.008492    4189 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/kubernetes-upgrade-274000/config.json ...
	I0615 10:29:22.008850    4189 start.go:365] acquiring machines lock for kubernetes-upgrade-274000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:29:22.008876    4189 start.go:369] acquired machines lock for "kubernetes-upgrade-274000" in 20.875µs
	I0615 10:29:22.008888    4189 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:29:22.008892    4189 fix.go:54] fixHost starting: 
	I0615 10:29:22.008998    4189 fix.go:102] recreateIfNeeded on kubernetes-upgrade-274000: state=Stopped err=<nil>
	W0615 10:29:22.009006    4189 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:29:22.017366    4189 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-274000" ...
	I0615 10:29:22.021386    4189 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:8a:fa:b4:7d:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0615 10:29:22.023182    4189 main.go:141] libmachine: STDOUT: 
	I0615 10:29:22.023200    4189 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:29:22.023231    4189 fix.go:56] fixHost completed within 14.33875ms
	I0615 10:29:22.023237    4189 start.go:83] releasing machines lock for "kubernetes-upgrade-274000", held for 14.356833ms
	W0615 10:29:22.023244    4189 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:29:22.023297    4189 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:29:22.023302    4189 start.go:687] Will try again in 5 seconds ...
	I0615 10:29:27.025424    4189 start.go:365] acquiring machines lock for kubernetes-upgrade-274000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:29:27.025816    4189 start.go:369] acquired machines lock for "kubernetes-upgrade-274000" in 320.709µs
	I0615 10:29:27.025971    4189 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:29:27.025992    4189 fix.go:54] fixHost starting: 
	I0615 10:29:27.026707    4189 fix.go:102] recreateIfNeeded on kubernetes-upgrade-274000: state=Stopped err=<nil>
	W0615 10:29:27.026731    4189 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:29:27.030094    4189 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-274000" ...
	I0615 10:29:27.034232    4189 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:8a:fa:b4:7d:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubernetes-upgrade-274000/disk.qcow2
	I0615 10:29:27.043471    4189 main.go:141] libmachine: STDOUT: 
	I0615 10:29:27.043521    4189 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:29:27.043598    4189 fix.go:56] fixHost completed within 17.609875ms
	I0615 10:29:27.043616    4189 start.go:83] releasing machines lock for "kubernetes-upgrade-274000", held for 17.775917ms
	W0615 10:29:27.043799    4189 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-274000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-274000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:29:27.051127    4189 out.go:177] 
	W0615 10:29:27.055133    4189 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:29:27.055181    4189 out.go:239] * 
	* 
	W0615 10:29:27.057784    4189 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:29:27.065088    4189 out.go:177] 

** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-274000 --memory=2200 --kubernetes-version=v1.27.3 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-274000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-274000 version --output=json: exit status 1 (65.312958ms)

** stderr ** 
	error: context "kubernetes-upgrade-274000" does not exist

** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-06-15 10:29:27.144757 -0700 PDT m=+3421.485059293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-274000 -n kubernetes-upgrade-274000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-274000 -n kubernetes-upgrade-274000: exit status 7 (32.854167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-274000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-274000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-274000
--- FAIL: TestKubernetesUpgrade (15.41s)
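
Note: this failure, and every remaining qemu2 start in this report (TestPause, the TestNoKubernetes group, and the network-plugin and start-stop groups listed in the summary), is the same environment problem: nothing is listening on the /var/run/socket_vmnet unix socket on the agent, so every VM create or restart dies with "Connection refused". A minimal sketch (plain Go, not minikube source; the socket path is taken from the log above) that reproduces the failing check:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the same unix socket the qemu2 driver hands to socket_vmnet_client.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Matches the report: Failed to connect to "/var/run/socket_vmnet": Connection refused
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is up")
	}

Restarting or reinstalling the socket_vmnet service on the agent would be the expected fix; until it is listening again, these failures are environmental rather than regressions.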

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.55s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=16718
- KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current4242333035/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.55s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.29s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=16718
- KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4156410174/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.29s)
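
Note: both TestHyperkitDriverSkipUpgrade subtests fail for the reason the error message states: hyperkit is an Intel-only hypervisor and can never run on this darwin/arm64 agent. A hedged sketch of the guard the subtests appear to be missing (skipIfNoHyperkit is a hypothetical helper, not code from driver_install_or_update_test.go):

	package upgradetest // hypothetical package, illustration only

	import (
		"runtime"
		"testing"
	)

	// skipIfNoHyperkit skips a test on any platform where the hyperkit
	// driver cannot work; hyperkit supports darwin/amd64 only.
	func skipIfNoHyperkit(t *testing.T) {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit is not supported on %s/%s", runtime.GOOS, runtime.GOARCH)
		}
	}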

TestStoppedBinaryUpgrade/Setup (163.46s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (163.46s)
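
Note: same root cause as TestRunningBinaryUpgrade above. The Setup step tries to install the v1.6.2 release binary, that release predates Apple Silicon so no darwin/arm64 build exists, and the download returns 404 (see the sketch after that test).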

TestPause/serial/Start (9.84s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-344000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
E0615 10:29:39.302592    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-344000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.770499917s)

-- stdout --
	* [pause-344000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-344000 in cluster pause-344000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-344000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-344000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-344000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-344000 -n pause-344000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-344000 -n pause-344000: exit status 7 (68.625583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-344000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.84s)

TestNoKubernetes/serial/StartWithK8s (10.05s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 : exit status 80 (9.982524292s)

-- stdout --
	* [NoKubernetes-750000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-750000 in cluster NoKubernetes-750000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-750000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000: exit status 7 (69.613791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.05s)

TestNoKubernetes/serial/StartWithStopK8s (5.47s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 : exit status 80 (5.396178959s)

-- stdout --
	* [NoKubernetes-750000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-750000
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000: exit status 7 (72.006667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.47s)

TestNoKubernetes/serial/Start (5.48s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 
E0615 10:30:00.471657    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 : exit status 80 (5.406327375s)

-- stdout --
	* [NoKubernetes-750000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-750000
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000: exit status 7 (69.0145ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.48s)

TestNoKubernetes/serial/StartNoArgs (5.46s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 
E0615 10:30:07.009572    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 : exit status 80 (5.393714s)

-- stdout --
	* [NoKubernetes-750000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-750000
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000: exit status 7 (68.811625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.46s)

TestNetworkPlugins/group/auto/Start (9.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.902675583s)

-- stdout --
	* [auto-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-678000 in cluster auto-678000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:30:10.372380    4326 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:30:10.372523    4326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:30:10.372525    4326 out.go:309] Setting ErrFile to fd 2...
	I0615 10:30:10.372528    4326 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:30:10.372600    4326 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:30:10.373608    4326 out.go:303] Setting JSON to false
	I0615 10:30:10.388903    4326 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3581,"bootTime":1686846629,"procs":449,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:30:10.389246    4326 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:30:10.393114    4326 out.go:177] * [auto-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:30:10.401184    4326 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:30:10.405108    4326 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:30:10.401210    4326 notify.go:220] Checking for updates...
	I0615 10:30:10.411091    4326 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:30:10.414160    4326 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:30:10.417122    4326 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:30:10.420150    4326 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:30:10.423487    4326 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:30:10.423561    4326 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:30:10.428128    4326 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:30:10.435085    4326 start.go:297] selected driver: qemu2
	I0615 10:30:10.435090    4326 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:30:10.435098    4326 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:30:10.437122    4326 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:30:10.440138    4326 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:30:10.443227    4326 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:30:10.443257    4326 cni.go:84] Creating CNI manager for ""
	I0615 10:30:10.443262    4326 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:30:10.443266    4326 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:30:10.443272    4326 start_flags.go:319] config:
	{Name:auto-678000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:30:10.443362    4326 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:30:10.451125    4326 out.go:177] * Starting control plane node auto-678000 in cluster auto-678000
	I0615 10:30:10.454968    4326 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:30:10.454996    4326 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:30:10.455008    4326 cache.go:57] Caching tarball of preloaded images
	I0615 10:30:10.455067    4326 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:30:10.455089    4326 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:30:10.455149    4326 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/auto-678000/config.json ...
	I0615 10:30:10.455160    4326 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/auto-678000/config.json: {Name:mk83d0ff46b19a55148540d99b5092ef197818e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:30:10.455360    4326 start.go:365] acquiring machines lock for auto-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:30:10.455390    4326 start.go:369] acquired machines lock for "auto-678000" in 24.667µs
	I0615 10:30:10.455400    4326 start.go:93] Provisioning new machine with config: &{Name:auto-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:30:10.455427    4326 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:30:10.463986    4326 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:30:10.481006    4326 start.go:159] libmachine.API.Create for "auto-678000" (driver="qemu2")
	I0615 10:30:10.481040    4326 client.go:168] LocalClient.Create starting
	I0615 10:30:10.481099    4326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:30:10.481119    4326 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:10.481130    4326 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:10.481191    4326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:30:10.481206    4326 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:10.481217    4326 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:10.481579    4326 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:30:10.631410    4326 main.go:141] libmachine: Creating SSH key...
	I0615 10:30:10.907677    4326 main.go:141] libmachine: Creating Disk image...
	I0615 10:30:10.907685    4326 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:30:10.907908    4326 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2
	I0615 10:30:10.917103    4326 main.go:141] libmachine: STDOUT: 
	I0615 10:30:10.917117    4326 main.go:141] libmachine: STDERR: 
	I0615 10:30:10.917170    4326 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2 +20000M
	I0615 10:30:10.924276    4326 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:30:10.924287    4326 main.go:141] libmachine: STDERR: 
	I0615 10:30:10.924299    4326 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2
	I0615 10:30:10.924305    4326 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:30:10.924333    4326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:4d:e2:a8:81:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2
	I0615 10:30:10.925834    4326 main.go:141] libmachine: STDOUT: 
	I0615 10:30:10.925850    4326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:30:10.925872    4326 client.go:171] LocalClient.Create took 444.827333ms
	I0615 10:30:12.928026    4326 start.go:128] duration metric: createHost completed in 2.47262225s
	I0615 10:30:12.928163    4326 start.go:83] releasing machines lock for "auto-678000", held for 2.4727655s
	W0615 10:30:12.928237    4326 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:12.936658    4326 out.go:177] * Deleting "auto-678000" in qemu2 ...
	W0615 10:30:12.956745    4326 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:12.956831    4326 start.go:687] Will try again in 5 seconds ...
	I0615 10:30:17.959033    4326 start.go:365] acquiring machines lock for auto-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:30:17.959807    4326 start.go:369] acquired machines lock for "auto-678000" in 677.708µs
	I0615 10:30:17.959928    4326 start.go:93] Provisioning new machine with config: &{Name:auto-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:auto-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:30:17.960192    4326 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:30:17.969806    4326 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:30:18.019886    4326 start.go:159] libmachine.API.Create for "auto-678000" (driver="qemu2")
	I0615 10:30:18.019931    4326 client.go:168] LocalClient.Create starting
	I0615 10:30:18.020101    4326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:30:18.020162    4326 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:18.020188    4326 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:18.020286    4326 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:30:18.020316    4326 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:18.020334    4326 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:18.020899    4326 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:30:18.145589    4326 main.go:141] libmachine: Creating SSH key...
	I0615 10:30:18.194305    4326 main.go:141] libmachine: Creating Disk image...
	I0615 10:30:18.194310    4326 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:30:18.194468    4326 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2
	I0615 10:30:18.202964    4326 main.go:141] libmachine: STDOUT: 
	I0615 10:30:18.202979    4326 main.go:141] libmachine: STDERR: 
	I0615 10:30:18.203031    4326 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2 +20000M
	I0615 10:30:18.210095    4326 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:30:18.210117    4326 main.go:141] libmachine: STDERR: 
	I0615 10:30:18.210136    4326 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2
	I0615 10:30:18.210142    4326 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:30:18.210179    4326 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:10:ed:a5:37:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/auto-678000/disk.qcow2
	I0615 10:30:18.211731    4326 main.go:141] libmachine: STDOUT: 
	I0615 10:30:18.211745    4326 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:30:18.211757    4326 client.go:171] LocalClient.Create took 191.824125ms
	I0615 10:30:20.213888    4326 start.go:128] duration metric: createHost completed in 2.253698167s
	I0615 10:30:20.213957    4326 start.go:83] releasing machines lock for "auto-678000", held for 2.254160042s
	W0615 10:30:20.214464    4326 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:20.221906    4326 out.go:177] 
	W0615 10:30:20.226014    4326 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:30:20.226040    4326 out.go:239] * 
	* 
	W0615 10:30:20.228671    4326 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:30:20.234023    4326 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.90s)
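
The stderr trace above records the exact libmachine invocation: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected socket to qemu as file descriptor 3 (hence -netdev socket,id=net0,fd=3). The wrapper can be probed on its own to separate a socket_vmnet outage from a minikube regression; a minimal check, with "true" used here purely as a no-op payload command:

	# Connect to the daemon and exec a no-op; failure reproduces the driver error verbatim.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

With a healthy daemon the client connects and execs the payload silently; in the broken state it prints the same Failed to connect to "/var/run/socket_vmnet": Connection refused seen in every create attempt above.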

TestNetworkPlugins/group/kindnet/Start (9.78s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.780269208s)

-- stdout --
	* [kindnet-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-678000 in cluster kindnet-678000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:30:22.369351    4435 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:30:22.369490    4435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:30:22.369492    4435 out.go:309] Setting ErrFile to fd 2...
	I0615 10:30:22.369495    4435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:30:22.369571    4435 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:30:22.370644    4435 out.go:303] Setting JSON to false
	I0615 10:30:22.386302    4435 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3593,"bootTime":1686846629,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:30:22.386371    4435 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:30:22.391755    4435 out.go:177] * [kindnet-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:30:22.399895    4435 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:30:22.403848    4435 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:30:22.399929    4435 notify.go:220] Checking for updates...
	I0615 10:30:22.409821    4435 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:30:22.412870    4435 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:30:22.415907    4435 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:30:22.418863    4435 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:30:22.422158    4435 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:30:22.422205    4435 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:30:22.426838    4435 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:30:22.432881    4435 start.go:297] selected driver: qemu2
	I0615 10:30:22.432886    4435 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:30:22.432894    4435 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:30:22.434909    4435 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:30:22.437809    4435 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:30:22.440983    4435 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:30:22.441005    4435 cni.go:84] Creating CNI manager for "kindnet"
	I0615 10:30:22.441015    4435 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0615 10:30:22.441020    4435 start_flags.go:319] config:
	{Name:kindnet-678000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:30:22.441104    4435 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:30:22.444816    4435 out.go:177] * Starting control plane node kindnet-678000 in cluster kindnet-678000
	I0615 10:30:22.452895    4435 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:30:22.452918    4435 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:30:22.452968    4435 cache.go:57] Caching tarball of preloaded images
	I0615 10:30:22.453027    4435 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:30:22.453032    4435 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:30:22.453094    4435 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/kindnet-678000/config.json ...
	I0615 10:30:22.453106    4435 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/kindnet-678000/config.json: {Name:mk01928a3eac708ba8d7179205b5daa6afb66e36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:30:22.453307    4435 start.go:365] acquiring machines lock for kindnet-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:30:22.453338    4435 start.go:369] acquired machines lock for "kindnet-678000" in 24.875µs
	I0615 10:30:22.453352    4435 start.go:93] Provisioning new machine with config: &{Name:kindnet-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:30:22.453384    4435 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:30:22.461889    4435 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:30:22.478568    4435 start.go:159] libmachine.API.Create for "kindnet-678000" (driver="qemu2")
	I0615 10:30:22.478593    4435 client.go:168] LocalClient.Create starting
	I0615 10:30:22.478672    4435 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:30:22.478693    4435 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:22.478704    4435 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:22.478755    4435 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:30:22.478771    4435 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:22.478778    4435 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:22.479144    4435 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:30:22.594289    4435 main.go:141] libmachine: Creating SSH key...
	I0615 10:30:22.768822    4435 main.go:141] libmachine: Creating Disk image...
	I0615 10:30:22.768828    4435 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:30:22.768997    4435 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2
	I0615 10:30:22.778056    4435 main.go:141] libmachine: STDOUT: 
	I0615 10:30:22.778076    4435 main.go:141] libmachine: STDERR: 
	I0615 10:30:22.778134    4435 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2 +20000M
	I0615 10:30:22.785275    4435 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:30:22.785287    4435 main.go:141] libmachine: STDERR: 
	I0615 10:30:22.785306    4435 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2
	I0615 10:30:22.785311    4435 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:30:22.785350    4435 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:2c:e4:d8:0e:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2
	I0615 10:30:22.786807    4435 main.go:141] libmachine: STDOUT: 
	I0615 10:30:22.786818    4435 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:30:22.786834    4435 client.go:171] LocalClient.Create took 308.241208ms
	I0615 10:30:24.789006    4435 start.go:128] duration metric: createHost completed in 2.335641834s
	I0615 10:30:24.789056    4435 start.go:83] releasing machines lock for "kindnet-678000", held for 2.335747125s
	W0615 10:30:24.789127    4435 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:24.799957    4435 out.go:177] * Deleting "kindnet-678000" in qemu2 ...
	W0615 10:30:24.818881    4435 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:24.818907    4435 start.go:687] Will try again in 5 seconds ...
	I0615 10:30:29.821078    4435 start.go:365] acquiring machines lock for kindnet-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:30:29.821682    4435 start.go:369] acquired machines lock for "kindnet-678000" in 325.459µs
	I0615 10:30:29.821817    4435 start.go:93] Provisioning new machine with config: &{Name:kindnet-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kindnet-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:30:29.822086    4435 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:30:29.830042    4435 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:30:29.875447    4435 start.go:159] libmachine.API.Create for "kindnet-678000" (driver="qemu2")
	I0615 10:30:29.875491    4435 client.go:168] LocalClient.Create starting
	I0615 10:30:29.875623    4435 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:30:29.875671    4435 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:29.875687    4435 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:29.875772    4435 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:30:29.875800    4435 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:29.875818    4435 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:29.876329    4435 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:30:30.004770    4435 main.go:141] libmachine: Creating SSH key...
	I0615 10:30:30.061148    4435 main.go:141] libmachine: Creating Disk image...
	I0615 10:30:30.061154    4435 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:30:30.061311    4435 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2
	I0615 10:30:30.069910    4435 main.go:141] libmachine: STDOUT: 
	I0615 10:30:30.069922    4435 main.go:141] libmachine: STDERR: 
	I0615 10:30:30.069973    4435 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2 +20000M
	I0615 10:30:30.077120    4435 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:30:30.077136    4435 main.go:141] libmachine: STDERR: 
	I0615 10:30:30.077160    4435 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2
	I0615 10:30:30.077166    4435 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:30:30.077199    4435 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:e0:f3:54:c4:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kindnet-678000/disk.qcow2
	I0615 10:30:30.078786    4435 main.go:141] libmachine: STDOUT: 
	I0615 10:30:30.078801    4435 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:30:30.078812    4435 client.go:171] LocalClient.Create took 203.320375ms
	I0615 10:30:32.080992    4435 start.go:128] duration metric: createHost completed in 2.258918041s
	I0615 10:30:32.081162    4435 start.go:83] releasing machines lock for "kindnet-678000", held for 2.259383292s
	W0615 10:30:32.081607    4435 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:32.091278    4435 out.go:177] 
	W0615 10:30:32.095372    4435 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:30:32.095398    4435 out.go:239] * 
	* 
	W0615 10:30:32.098140    4435 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:30:32.108214    4435 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.78s)
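
Every start in this group dies at the same step: the disk image is created successfully (qemu-img convert to qcow2, then resize +20000M), but when libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, the client cannot reach the socket_vmnet daemon's unix socket, so the VM never boots. A quick sketch for checking the daemon state on the agent, using the paths seen in this log (lsof output format varies by macOS version):

    # does the socket file exist, and is anything actually bound to it?
    ls -l /var/run/socket_vmnet
    sudo lsof -U | grep socket_vmnet    # no output here means no daemon is listening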

TestNetworkPlugins/group/calico/Start (9.74s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.734960667s)

-- stdout --
	* [calico-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-678000 in cluster calico-678000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:30:34.335592    4549 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:30:34.335755    4549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:30:34.335758    4549 out.go:309] Setting ErrFile to fd 2...
	I0615 10:30:34.335761    4549 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:30:34.335832    4549 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:30:34.336900    4549 out.go:303] Setting JSON to false
	I0615 10:30:34.351963    4549 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3605,"bootTime":1686846629,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:30:34.352033    4549 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:30:34.356902    4549 out.go:177] * [calico-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:30:34.363883    4549 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:30:34.366795    4549 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:30:34.363936    4549 notify.go:220] Checking for updates...
	I0615 10:30:34.372801    4549 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:30:34.374299    4549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:30:34.376796    4549 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:30:34.379831    4549 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:30:34.383210    4549 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:30:34.383248    4549 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:30:34.387794    4549 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:30:34.394845    4549 start.go:297] selected driver: qemu2
	I0615 10:30:34.394856    4549 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:30:34.394867    4549 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:30:34.396774    4549 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:30:34.399748    4549 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:30:34.402949    4549 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:30:34.402972    4549 cni.go:84] Creating CNI manager for "calico"
	I0615 10:30:34.402977    4549 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0615 10:30:34.402990    4549 start_flags.go:319] config:
	{Name:calico-678000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:30:34.403084    4549 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:30:34.410810    4549 out.go:177] * Starting control plane node calico-678000 in cluster calico-678000
	I0615 10:30:34.414850    4549 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:30:34.414879    4549 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:30:34.414898    4549 cache.go:57] Caching tarball of preloaded images
	I0615 10:30:34.414953    4549 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:30:34.414964    4549 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:30:34.415031    4549 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/calico-678000/config.json ...
	I0615 10:30:34.415042    4549 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/calico-678000/config.json: {Name:mk7db173e6511d5042e5ac09d80f8429570ac968 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:30:34.415252    4549 start.go:365] acquiring machines lock for calico-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:30:34.415283    4549 start.go:369] acquired machines lock for "calico-678000" in 24.917µs
	I0615 10:30:34.415293    4549 start.go:93] Provisioning new machine with config: &{Name:calico-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:30:34.415322    4549 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:30:34.423834    4549 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:30:34.440538    4549 start.go:159] libmachine.API.Create for "calico-678000" (driver="qemu2")
	I0615 10:30:34.440569    4549 client.go:168] LocalClient.Create starting
	I0615 10:30:34.440631    4549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:30:34.440651    4549 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:34.440666    4549 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:34.440710    4549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:30:34.440725    4549 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:34.440735    4549 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:34.441068    4549 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:30:34.552060    4549 main.go:141] libmachine: Creating SSH key...
	I0615 10:30:34.649721    4549 main.go:141] libmachine: Creating Disk image...
	I0615 10:30:34.649731    4549 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:30:34.649875    4549 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2
	I0615 10:30:34.658813    4549 main.go:141] libmachine: STDOUT: 
	I0615 10:30:34.658830    4549 main.go:141] libmachine: STDERR: 
	I0615 10:30:34.658890    4549 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2 +20000M
	I0615 10:30:34.666038    4549 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:30:34.666050    4549 main.go:141] libmachine: STDERR: 
	I0615 10:30:34.666062    4549 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2
	I0615 10:30:34.666068    4549 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:30:34.666119    4549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:33:df:21:d8:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2
	I0615 10:30:34.667645    4549 main.go:141] libmachine: STDOUT: 
	I0615 10:30:34.667657    4549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:30:34.667674    4549 client.go:171] LocalClient.Create took 227.103334ms
	I0615 10:30:36.669859    4549 start.go:128] duration metric: createHost completed in 2.254500958s
	I0615 10:30:36.669917    4549 start.go:83] releasing machines lock for "calico-678000", held for 2.254659166s
	W0615 10:30:36.669962    4549 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:36.680548    4549 out.go:177] * Deleting "calico-678000" in qemu2 ...
	W0615 10:30:36.698461    4549 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:36.698491    4549 start.go:687] Will try again in 5 seconds ...
	I0615 10:30:41.700176    4549 start.go:365] acquiring machines lock for calico-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:30:41.700502    4549 start.go:369] acquired machines lock for "calico-678000" in 235.208µs
	I0615 10:30:41.700593    4549 start.go:93] Provisioning new machine with config: &{Name:calico-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:calico-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:30:41.700806    4549 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:30:41.708836    4549 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:30:41.753522    4549 start.go:159] libmachine.API.Create for "calico-678000" (driver="qemu2")
	I0615 10:30:41.753580    4549 client.go:168] LocalClient.Create starting
	I0615 10:30:41.753707    4549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:30:41.753760    4549 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:41.753789    4549 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:41.753871    4549 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:30:41.753903    4549 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:41.753917    4549 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:41.754456    4549 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:30:41.876598    4549 main.go:141] libmachine: Creating SSH key...
	I0615 10:30:41.981517    4549 main.go:141] libmachine: Creating Disk image...
	I0615 10:30:41.981523    4549 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:30:41.981663    4549 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2
	I0615 10:30:41.989965    4549 main.go:141] libmachine: STDOUT: 
	I0615 10:30:41.989986    4549 main.go:141] libmachine: STDERR: 
	I0615 10:30:41.990043    4549 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2 +20000M
	I0615 10:30:41.997014    4549 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:30:41.997027    4549 main.go:141] libmachine: STDERR: 
	I0615 10:30:41.997040    4549 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2
	I0615 10:30:41.997047    4549 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:30:41.997076    4549 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:d0:2d:69:43:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/calico-678000/disk.qcow2
	I0615 10:30:41.998471    4549 main.go:141] libmachine: STDOUT: 
	I0615 10:30:41.998485    4549 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:30:41.998499    4549 client.go:171] LocalClient.Create took 244.918292ms
	I0615 10:30:44.000634    4549 start.go:128] duration metric: createHost completed in 2.299838584s
	I0615 10:30:44.000715    4549 start.go:83] releasing machines lock for "calico-678000", held for 2.300222667s
	W0615 10:30:44.001179    4549 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:44.012879    4549 out.go:177] 
	W0615 10:30:44.016856    4549 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:30:44.016892    4549 out.go:239] * 
	* 
	W0615 10:30:44.019836    4549 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:30:44.028832    4549 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.74s)
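
Note the retry flow in the stderr above: after the first "Connection refused", minikube deletes the half-created profile, waits 5 seconds, provisions a second VM, and only then gives up with exit status 80 (GUEST_PROVISION). Since the daemon is down for the whole run, every retry fails identically. A sketch of bringing socket_vmnet up in the foreground before re-running the suite, assuming the /opt/socket_vmnet install layout shown in the log (the --vmnet-gateway value is the project's documented default and an assumption for this host):

    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet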

TestNetworkPlugins/group/custom-flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.848750458s)

-- stdout --
	* [custom-flannel-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-678000 in cluster custom-flannel-678000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:30:46.441850    4669 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:30:46.441999    4669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:30:46.442002    4669 out.go:309] Setting ErrFile to fd 2...
	I0615 10:30:46.442004    4669 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:30:46.442070    4669 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:30:46.443133    4669 out.go:303] Setting JSON to false
	I0615 10:30:46.459597    4669 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3617,"bootTime":1686846629,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:30:46.459668    4669 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:30:46.463905    4669 out.go:177] * [custom-flannel-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:30:46.470817    4669 notify.go:220] Checking for updates...
	I0615 10:30:46.470825    4669 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:30:46.474781    4669 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:30:46.477764    4669 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:30:46.480751    4669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:30:46.483768    4669 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:30:46.486730    4669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:30:46.490045    4669 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:30:46.490107    4669 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:30:46.494746    4669 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:30:46.501758    4669 start.go:297] selected driver: qemu2
	I0615 10:30:46.501763    4669 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:30:46.501770    4669 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:30:46.503718    4669 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:30:46.506729    4669 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:30:46.508190    4669 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:30:46.508212    4669 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0615 10:30:46.508236    4669 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0615 10:30:46.508243    4669 start_flags.go:319] config:
	{Name:custom-flannel-678000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:30:46.508333    4669 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:30:46.515776    4669 out.go:177] * Starting control plane node custom-flannel-678000 in cluster custom-flannel-678000
	I0615 10:30:46.519711    4669 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:30:46.519734    4669 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:30:46.519748    4669 cache.go:57] Caching tarball of preloaded images
	I0615 10:30:46.519804    4669 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:30:46.519811    4669 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:30:46.519872    4669 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/custom-flannel-678000/config.json ...
	I0615 10:30:46.519883    4669 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/custom-flannel-678000/config.json: {Name:mkeede281b318182cf85ab64ac3fa871ea9c096c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:30:46.520091    4669 start.go:365] acquiring machines lock for custom-flannel-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:30:46.520125    4669 start.go:369] acquired machines lock for "custom-flannel-678000" in 24.708µs
	I0615 10:30:46.520137    4669 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:30:46.520165    4669 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:30:46.528719    4669 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:30:46.545390    4669 start.go:159] libmachine.API.Create for "custom-flannel-678000" (driver="qemu2")
	I0615 10:30:46.545412    4669 client.go:168] LocalClient.Create starting
	I0615 10:30:46.545471    4669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:30:46.545496    4669 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:46.545505    4669 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:46.545554    4669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:30:46.545570    4669 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:46.545579    4669 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:46.545906    4669 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:30:46.675114    4669 main.go:141] libmachine: Creating SSH key...
	I0615 10:30:46.851193    4669 main.go:141] libmachine: Creating Disk image...
	I0615 10:30:46.851199    4669 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:30:46.851374    4669 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2
	I0615 10:30:46.860367    4669 main.go:141] libmachine: STDOUT: 
	I0615 10:30:46.860389    4669 main.go:141] libmachine: STDERR: 
	I0615 10:30:46.860455    4669 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2 +20000M
	I0615 10:30:46.867703    4669 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:30:46.867716    4669 main.go:141] libmachine: STDERR: 
	I0615 10:30:46.867734    4669 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2
	I0615 10:30:46.867739    4669 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:30:46.867770    4669 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:0f:39:f1:59:cc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2
	I0615 10:30:46.869269    4669 main.go:141] libmachine: STDOUT: 
	I0615 10:30:46.869286    4669 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:30:46.869303    4669 client.go:171] LocalClient.Create took 323.892792ms
	I0615 10:30:48.871759    4669 start.go:128] duration metric: createHost completed in 2.351602875s
	I0615 10:30:48.871848    4669 start.go:83] releasing machines lock for "custom-flannel-678000", held for 2.351751875s
	W0615 10:30:48.871908    4669 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:48.880325    4669 out.go:177] * Deleting "custom-flannel-678000" in qemu2 ...
	W0615 10:30:48.899934    4669 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:48.899965    4669 start.go:687] Will try again in 5 seconds ...
	I0615 10:30:53.902191    4669 start.go:365] acquiring machines lock for custom-flannel-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:30:53.902768    4669 start.go:369] acquired machines lock for "custom-flannel-678000" in 477.125µs
	I0615 10:30:53.902881    4669 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:custom-flannel-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:30:53.903188    4669 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:30:53.911785    4669 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:30:53.959696    4669 start.go:159] libmachine.API.Create for "custom-flannel-678000" (driver="qemu2")
	I0615 10:30:53.959749    4669 client.go:168] LocalClient.Create starting
	I0615 10:30:53.959876    4669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:30:53.959920    4669 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:53.959936    4669 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:53.960014    4669 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:30:53.960050    4669 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:53.960070    4669 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:53.960594    4669 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:30:54.091777    4669 main.go:141] libmachine: Creating SSH key...
	I0615 10:30:54.203275    4669 main.go:141] libmachine: Creating Disk image...
	I0615 10:30:54.203283    4669 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:30:54.203426    4669 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2
	I0615 10:30:54.211911    4669 main.go:141] libmachine: STDOUT: 
	I0615 10:30:54.211926    4669 main.go:141] libmachine: STDERR: 
	I0615 10:30:54.211979    4669 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2 +20000M
	I0615 10:30:54.219085    4669 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:30:54.219097    4669 main.go:141] libmachine: STDERR: 
	I0615 10:30:54.219107    4669 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2
	I0615 10:30:54.219114    4669 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:30:54.219156    4669 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:37:08:8c:a4:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/custom-flannel-678000/disk.qcow2
	I0615 10:30:54.220662    4669 main.go:141] libmachine: STDOUT: 
	I0615 10:30:54.220677    4669 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:30:54.220689    4669 client.go:171] LocalClient.Create took 260.934916ms
	I0615 10:30:56.222839    4669 start.go:128] duration metric: createHost completed in 2.319647291s
	I0615 10:30:56.222889    4669 start.go:83] releasing machines lock for "custom-flannel-678000", held for 2.320131958s
	W0615 10:30:56.223216    4669 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:30:56.232829    4669 out.go:177] 
	W0615 10:30:56.237938    4669 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:30:56.237982    4669 out.go:239] * 
	* 
	W0615 10:30:56.240505    4669 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:30:56.249895    4669 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.86s)
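
For any profile left behind by these failed starts, the cleanup and log-collection steps are the ones the output above already suggests; for this test, for example (run from the same working directory the suite used; `minikube logs` may also need the `-p` profile flag depending on the active profile):

    out/minikube-darwin-arm64 delete -p custom-flannel-678000
    out/minikube-darwin-arm64 logs --file=logs.txt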

TestNetworkPlugins/group/false/Start (9.7s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.694587916s)

-- stdout --
	* [false-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-678000 in cluster false-678000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:30:58.625952    4786 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:30:58.626094    4786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:30:58.626097    4786 out.go:309] Setting ErrFile to fd 2...
	I0615 10:30:58.626100    4786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:30:58.626173    4786 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:30:58.627224    4786 out.go:303] Setting JSON to false
	I0615 10:30:58.642213    4786 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3629,"bootTime":1686846629,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:30:58.642290    4786 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:30:58.646521    4786 out.go:177] * [false-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:30:58.653329    4786 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:30:58.653403    4786 notify.go:220] Checking for updates...
	I0615 10:30:58.660314    4786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:30:58.663316    4786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:30:58.666339    4786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:30:58.669294    4786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:30:58.672321    4786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:30:58.675671    4786 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:30:58.675720    4786 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:30:58.679254    4786 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:30:58.686324    4786 start.go:297] selected driver: qemu2
	I0615 10:30:58.686328    4786 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:30:58.686334    4786 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:30:58.688148    4786 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:30:58.689507    4786 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:30:58.692371    4786 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:30:58.692401    4786 cni.go:84] Creating CNI manager for "false"
	I0615 10:30:58.692405    4786 start_flags.go:319] config:
	{Name:false-678000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:30:58.692495    4786 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:30:58.700305    4786 out.go:177] * Starting control plane node false-678000 in cluster false-678000
	I0615 10:30:58.704319    4786 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:30:58.704343    4786 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:30:58.704356    4786 cache.go:57] Caching tarball of preloaded images
	I0615 10:30:58.704418    4786 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:30:58.704424    4786 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:30:58.704483    4786 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/false-678000/config.json ...
	I0615 10:30:58.704495    4786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/false-678000/config.json: {Name:mk2273e8b7a7a2d2a8a4ecf4da035c248e1d63d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:30:58.704705    4786 start.go:365] acquiring machines lock for false-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:30:58.704736    4786 start.go:369] acquired machines lock for "false-678000" in 25.208µs
	I0615 10:30:58.704746    4786 start.go:93] Provisioning new machine with config: &{Name:false-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:30:58.704791    4786 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:30:58.709314    4786 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:30:58.725360    4786 start.go:159] libmachine.API.Create for "false-678000" (driver="qemu2")
	I0615 10:30:58.725384    4786 client.go:168] LocalClient.Create starting
	I0615 10:30:58.725448    4786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:30:58.725468    4786 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:58.725476    4786 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:58.725521    4786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:30:58.725536    4786 main.go:141] libmachine: Decoding PEM data...
	I0615 10:30:58.725544    4786 main.go:141] libmachine: Parsing certificate...
	I0615 10:30:58.725838    4786 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:30:58.836586    4786 main.go:141] libmachine: Creating SSH key...
	I0615 10:30:58.902172    4786 main.go:141] libmachine: Creating Disk image...
	I0615 10:30:58.902180    4786 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:30:58.902331    4786 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2
	I0615 10:30:58.910808    4786 main.go:141] libmachine: STDOUT: 
	I0615 10:30:58.910826    4786 main.go:141] libmachine: STDERR: 
	I0615 10:30:58.910881    4786 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2 +20000M
	I0615 10:30:58.918329    4786 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:30:58.918341    4786 main.go:141] libmachine: STDERR: 
	I0615 10:30:58.918359    4786 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2
	I0615 10:30:58.918373    4786 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:30:58.918414    4786 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:47:4e:e0:fc:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2
	I0615 10:30:58.920017    4786 main.go:141] libmachine: STDOUT: 
	I0615 10:30:58.920027    4786 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:30:58.920042    4786 client.go:171] LocalClient.Create took 194.656791ms
	I0615 10:31:00.922175    4786 start.go:128] duration metric: createHost completed in 2.217402209s
	I0615 10:31:00.922246    4786 start.go:83] releasing machines lock for "false-678000", held for 2.217534333s
	W0615 10:31:00.922334    4786 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:00.933323    4786 out.go:177] * Deleting "false-678000" in qemu2 ...
	W0615 10:31:00.954256    4786 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:00.954289    4786 start.go:687] Will try again in 5 seconds ...
	I0615 10:31:05.956418    4786 start.go:365] acquiring machines lock for false-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:31:05.956982    4786 start.go:369] acquired machines lock for "false-678000" in 454.417µs
	I0615 10:31:05.957099    4786 start.go:93] Provisioning new machine with config: &{Name:false-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:false-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:31:05.957422    4786 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:31:05.962888    4786 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:31:06.011518    4786 start.go:159] libmachine.API.Create for "false-678000" (driver="qemu2")
	I0615 10:31:06.011568    4786 client.go:168] LocalClient.Create starting
	I0615 10:31:06.011692    4786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:31:06.011758    4786 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:06.011787    4786 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:06.011862    4786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:31:06.011893    4786 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:06.011927    4786 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:06.012486    4786 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:31:06.140582    4786 main.go:141] libmachine: Creating SSH key...
	I0615 10:31:06.232265    4786 main.go:141] libmachine: Creating Disk image...
	I0615 10:31:06.232271    4786 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:31:06.232457    4786 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2
	I0615 10:31:06.241216    4786 main.go:141] libmachine: STDOUT: 
	I0615 10:31:06.241230    4786 main.go:141] libmachine: STDERR: 
	I0615 10:31:06.241285    4786 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2 +20000M
	I0615 10:31:06.248422    4786 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:31:06.248434    4786 main.go:141] libmachine: STDERR: 
	I0615 10:31:06.248450    4786 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2
	I0615 10:31:06.248455    4786 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:31:06.248503    4786 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:60:5c:27:6b:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/false-678000/disk.qcow2
	I0615 10:31:06.250029    4786 main.go:141] libmachine: STDOUT: 
	I0615 10:31:06.250042    4786 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:31:06.250054    4786 client.go:171] LocalClient.Create took 238.480834ms
	I0615 10:31:08.252204    4786 start.go:128] duration metric: createHost completed in 2.294786459s
	I0615 10:31:08.252302    4786 start.go:83] releasing machines lock for "false-678000", held for 2.295317s
	W0615 10:31:08.252786    4786 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:08.262569    4786 out.go:177] 
	W0615 10:31:08.266672    4786 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:31:08.266696    4786 out.go:239] * 
	* 
	W0615 10:31:08.269430    4786 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:31:08.279377    4786 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.70s)
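
The trace above also shows minikube's recovery path: after the first "Connection refused" it deletes the half-created profile, waits 5 seconds, retries once, and only then exits with status 80. The failing step can be reproduced outside the test suite with the same client binary the log invokes; a sketch, where `true` is an arbitrary stand-in for the qemu-system-aarch64 command line that socket_vmnet_client would otherwise exec with the connected socket passed as fd 3 (per the logged -netdev socket,id=net0,fd=3):

	# With the daemon down this prints the same "Connection refused" as above
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true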

TestNetworkPlugins/group/enable-default-cni/Start (9.81s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
E0615 10:31:17.262445    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.806212334s)

-- stdout --
	* [enable-default-cni-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-678000 in cluster enable-default-cni-678000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:31:10.470596    4899 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:31:10.470716    4899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:31:10.470719    4899 out.go:309] Setting ErrFile to fd 2...
	I0615 10:31:10.470721    4899 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:31:10.470788    4899 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:31:10.471814    4899 out.go:303] Setting JSON to false
	I0615 10:31:10.486925    4899 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3641,"bootTime":1686846629,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:31:10.486989    4899 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:31:10.490724    4899 out.go:177] * [enable-default-cni-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:31:10.498741    4899 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:31:10.498808    4899 notify.go:220] Checking for updates...
	I0615 10:31:10.502670    4899 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:31:10.505717    4899 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:31:10.508721    4899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:31:10.511723    4899 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:31:10.514726    4899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:31:10.517987    4899 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:31:10.518026    4899 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:31:10.522712    4899 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:31:10.529728    4899 start.go:297] selected driver: qemu2
	I0615 10:31:10.529733    4899 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:31:10.529740    4899 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:31:10.531620    4899 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:31:10.534670    4899 out.go:177] * Automatically selected the socket_vmnet network
	E0615 10:31:10.537785    4899 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0615 10:31:10.537796    4899 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:31:10.537820    4899 cni.go:84] Creating CNI manager for "bridge"
	I0615 10:31:10.537825    4899 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:31:10.537833    4899 start_flags.go:319] config:
	{Name:enable-default-cni-678000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:31:10.537920    4899 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:31:10.545687    4899 out.go:177] * Starting control plane node enable-default-cni-678000 in cluster enable-default-cni-678000
	I0615 10:31:10.549678    4899 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:31:10.549703    4899 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:31:10.549713    4899 cache.go:57] Caching tarball of preloaded images
	I0615 10:31:10.549781    4899 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:31:10.549786    4899 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:31:10.549845    4899 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/enable-default-cni-678000/config.json ...
	I0615 10:31:10.549856    4899 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/enable-default-cni-678000/config.json: {Name:mk997d87c1e57612a87e57b7f791575bbba985f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:31:10.550055    4899 start.go:365] acquiring machines lock for enable-default-cni-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:31:10.550086    4899 start.go:369] acquired machines lock for "enable-default-cni-678000" in 24.875µs
	I0615 10:31:10.550096    4899 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:31:10.550122    4899 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:31:10.558703    4899 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:31:10.575259    4899 start.go:159] libmachine.API.Create for "enable-default-cni-678000" (driver="qemu2")
	I0615 10:31:10.575285    4899 client.go:168] LocalClient.Create starting
	I0615 10:31:10.575346    4899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:31:10.575365    4899 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:10.575374    4899 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:10.575422    4899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:31:10.575436    4899 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:10.575443    4899 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:10.575780    4899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:31:10.688986    4899 main.go:141] libmachine: Creating SSH key...
	I0615 10:31:10.870603    4899 main.go:141] libmachine: Creating Disk image...
	I0615 10:31:10.870611    4899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:31:10.870771    4899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2
	I0615 10:31:10.879720    4899 main.go:141] libmachine: STDOUT: 
	I0615 10:31:10.879733    4899 main.go:141] libmachine: STDERR: 
	I0615 10:31:10.879788    4899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2 +20000M
	I0615 10:31:10.886788    4899 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:31:10.886800    4899 main.go:141] libmachine: STDERR: 
	I0615 10:31:10.886819    4899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2
	I0615 10:31:10.886826    4899 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:31:10.886862    4899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:25:94:3a:bf:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2
	I0615 10:31:10.888324    4899 main.go:141] libmachine: STDOUT: 
	I0615 10:31:10.888336    4899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:31:10.888355    4899 client.go:171] LocalClient.Create took 313.066542ms
	I0615 10:31:12.890474    4899 start.go:128] duration metric: createHost completed in 2.340373333s
	I0615 10:31:12.890540    4899 start.go:83] releasing machines lock for "enable-default-cni-678000", held for 2.340482875s
	W0615 10:31:12.890655    4899 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:12.902002    4899 out.go:177] * Deleting "enable-default-cni-678000" in qemu2 ...
	W0615 10:31:12.923759    4899 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:12.923804    4899 start.go:687] Will try again in 5 seconds ...
	I0615 10:31:17.926035    4899 start.go:365] acquiring machines lock for enable-default-cni-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:31:17.926596    4899 start.go:369] acquired machines lock for "enable-default-cni-678000" in 442.166µs
	I0615 10:31:17.926787    4899 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:enable-default-cni-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:31:17.927066    4899 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:31:17.936805    4899 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:31:17.986410    4899 start.go:159] libmachine.API.Create for "enable-default-cni-678000" (driver="qemu2")
	I0615 10:31:17.986460    4899 client.go:168] LocalClient.Create starting
	I0615 10:31:17.986600    4899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:31:17.986652    4899 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:17.986679    4899 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:17.986757    4899 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:31:17.986793    4899 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:17.986807    4899 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:17.987302    4899 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:31:18.110520    4899 main.go:141] libmachine: Creating SSH key...
	I0615 10:31:18.193266    4899 main.go:141] libmachine: Creating Disk image...
	I0615 10:31:18.193271    4899 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:31:18.193443    4899 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2
	I0615 10:31:18.202225    4899 main.go:141] libmachine: STDOUT: 
	I0615 10:31:18.202239    4899 main.go:141] libmachine: STDERR: 
	I0615 10:31:18.202292    4899 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2 +20000M
	I0615 10:31:18.209466    4899 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:31:18.209482    4899 main.go:141] libmachine: STDERR: 
	I0615 10:31:18.209511    4899 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2
	I0615 10:31:18.209515    4899 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:31:18.209546    4899 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:05:b7:cc:9e:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/enable-default-cni-678000/disk.qcow2
	I0615 10:31:18.211130    4899 main.go:141] libmachine: STDOUT: 
	I0615 10:31:18.211143    4899 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:31:18.211166    4899 client.go:171] LocalClient.Create took 224.704958ms
	I0615 10:31:20.213320    4899 start.go:128] duration metric: createHost completed in 2.28623375s
	I0615 10:31:20.213373    4899 start.go:83] releasing machines lock for "enable-default-cni-678000", held for 2.286789791s
	W0615 10:31:20.213764    4899 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:20.222275    4899 out.go:177] 
	W0615 10:31:20.226276    4899 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:31:20.226335    4899 out.go:239] * 
	* 
	W0615 10:31:20.229139    4899 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:31:20.236259    4899 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.81s)
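
Note the warning at 10:31:10.537785 in the stderr above: --enable-default-cni is deprecated, and minikube rewrites it to the bridge CNI (CNI:bridge in the generated config). Once the socket_vmnet failure is resolved, the equivalent non-deprecated invocation for this scenario would be:

	out/minikube-darwin-arm64 start -p enable-default-cni-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2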

TestNetworkPlugins/group/flannel/Start (9.89s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
E0615 10:31:23.540877    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.886356708s)

-- stdout --
	* [flannel-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-678000 in cluster flannel-678000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:31:22.419958    5009 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:31:22.420113    5009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:31:22.420116    5009 out.go:309] Setting ErrFile to fd 2...
	I0615 10:31:22.420119    5009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:31:22.420192    5009 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:31:22.421265    5009 out.go:303] Setting JSON to false
	I0615 10:31:22.436421    5009 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3653,"bootTime":1686846629,"procs":437,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:31:22.436491    5009 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:31:22.441074    5009 out.go:177] * [flannel-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:31:22.448029    5009 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:31:22.452015    5009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:31:22.448079    5009 notify.go:220] Checking for updates...
	I0615 10:31:22.454976    5009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:31:22.458054    5009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:31:22.461021    5009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:31:22.464051    5009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:31:22.467382    5009 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:31:22.467422    5009 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:31:22.471971    5009 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:31:22.479035    5009 start.go:297] selected driver: qemu2
	I0615 10:31:22.479040    5009 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:31:22.479046    5009 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:31:22.480927    5009 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:31:22.485022    5009 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:31:22.488122    5009 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:31:22.488145    5009 cni.go:84] Creating CNI manager for "flannel"
	I0615 10:31:22.488149    5009 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0615 10:31:22.488156    5009 start_flags.go:319] config:
	{Name:flannel-678000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:31:22.488244    5009 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:31:22.495973    5009 out.go:177] * Starting control plane node flannel-678000 in cluster flannel-678000
	I0615 10:31:22.500006    5009 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:31:22.500029    5009 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:31:22.500043    5009 cache.go:57] Caching tarball of preloaded images
	I0615 10:31:22.500105    5009 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:31:22.500114    5009 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:31:22.500183    5009 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/flannel-678000/config.json ...
	I0615 10:31:22.500196    5009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/flannel-678000/config.json: {Name:mk8fd1a1d09181adb09d31590d8beda0551db103 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:31:22.500406    5009 start.go:365] acquiring machines lock for flannel-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:31:22.500439    5009 start.go:369] acquired machines lock for "flannel-678000" in 26.834µs
	I0615 10:31:22.500452    5009 start.go:93] Provisioning new machine with config: &{Name:flannel-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:31:22.500480    5009 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:31:22.509037    5009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:31:22.526410    5009 start.go:159] libmachine.API.Create for "flannel-678000" (driver="qemu2")
	I0615 10:31:22.526439    5009 client.go:168] LocalClient.Create starting
	I0615 10:31:22.526497    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:31:22.526519    5009 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:22.526533    5009 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:22.526582    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:31:22.526609    5009 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:22.526625    5009 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:22.526966    5009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:31:22.640123    5009 main.go:141] libmachine: Creating SSH key...
	I0615 10:31:22.894463    5009 main.go:141] libmachine: Creating Disk image...
	I0615 10:31:22.894474    5009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:31:22.894677    5009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2
	I0615 10:31:22.904109    5009 main.go:141] libmachine: STDOUT: 
	I0615 10:31:22.904133    5009 main.go:141] libmachine: STDERR: 
	I0615 10:31:22.904205    5009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2 +20000M
	I0615 10:31:22.911462    5009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:31:22.911475    5009 main.go:141] libmachine: STDERR: 
	I0615 10:31:22.911495    5009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2
	I0615 10:31:22.911508    5009 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:31:22.911559    5009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:2d:fa:9f:96:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2
	I0615 10:31:22.913102    5009 main.go:141] libmachine: STDOUT: 
	I0615 10:31:22.913115    5009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:31:22.913134    5009 client.go:171] LocalClient.Create took 386.696792ms
	I0615 10:31:24.915268    5009 start.go:128] duration metric: createHost completed in 2.414811334s
	I0615 10:31:24.915335    5009 start.go:83] releasing machines lock for "flannel-678000", held for 2.414923916s
	W0615 10:31:24.915426    5009 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:24.925533    5009 out.go:177] * Deleting "flannel-678000" in qemu2 ...
	W0615 10:31:24.946037    5009 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:24.946064    5009 start.go:687] Will try again in 5 seconds ...
	I0615 10:31:29.948252    5009 start.go:365] acquiring machines lock for flannel-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:31:29.948799    5009 start.go:369] acquired machines lock for "flannel-678000" in 424.959µs
	I0615 10:31:29.948972    5009 start.go:93] Provisioning new machine with config: &{Name:flannel-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:flannel-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:31:29.949315    5009 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:31:29.959748    5009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:31:30.007849    5009 start.go:159] libmachine.API.Create for "flannel-678000" (driver="qemu2")
	I0615 10:31:30.007895    5009 client.go:168] LocalClient.Create starting
	I0615 10:31:30.008042    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:31:30.008095    5009 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:30.008118    5009 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:30.008200    5009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:31:30.008227    5009 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:30.008239    5009 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:30.008808    5009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:31:30.132408    5009 main.go:141] libmachine: Creating SSH key...
	I0615 10:31:30.222790    5009 main.go:141] libmachine: Creating Disk image...
	I0615 10:31:30.222795    5009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:31:30.222944    5009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2
	I0615 10:31:30.231476    5009 main.go:141] libmachine: STDOUT: 
	I0615 10:31:30.231493    5009 main.go:141] libmachine: STDERR: 
	I0615 10:31:30.231561    5009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2 +20000M
	I0615 10:31:30.238678    5009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:31:30.238692    5009 main.go:141] libmachine: STDERR: 
	I0615 10:31:30.238719    5009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2
	I0615 10:31:30.238728    5009 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:31:30.238763    5009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:24:70:c0:d5:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/flannel-678000/disk.qcow2
	I0615 10:31:30.240342    5009 main.go:141] libmachine: STDOUT: 
	I0615 10:31:30.240355    5009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:31:30.240368    5009 client.go:171] LocalClient.Create took 232.469583ms
	I0615 10:31:32.242502    5009 start.go:128] duration metric: createHost completed in 2.293200042s
	I0615 10:31:32.242574    5009 start.go:83] releasing machines lock for "flannel-678000", held for 2.293791333s
	W0615 10:31:32.243020    5009 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:32.250430    5009 out.go:177] 
	W0615 10:31:32.254604    5009 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:31:32.254627    5009 out.go:239] * 
	* 
	W0615 10:31:32.257267    5009 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:31:32.264537    5009 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.89s)
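
Every start attempt in this group dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU is never handed its network file descriptor and createHost aborts. The probe below is a minimal sketch for confirming the daemon state on the affected agent; the paths come from the log above, while the commands themselves are assumptions about the host setup, not anything this run executed.

	# Does the socket file exist? (path taken from the log above)
	ls -l /var/run/socket_vmnet

	# Probe the socket directly; a "Connection refused" here reproduces
	# the failure without involving minikube or QEMU at all.
	nc -U -w 1 /var/run/socket_vmnet < /dev/null

	# Is any socket_vmnet daemon process running?
	pgrep -fl socket_vmnet

A "Connection refused" (rather than "No such file or directory") on a unix socket means the socket file exists but nothing is accepting on it, i.e. the daemon has died and left a stale socket behind.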

TestNetworkPlugins/group/bridge/Start (9.68s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.678845084s)

-- stdout --
	* [bridge-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-678000 in cluster bridge-678000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:31:34.655439    5129 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:31:34.655575    5129 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:31:34.655578    5129 out.go:309] Setting ErrFile to fd 2...
	I0615 10:31:34.655580    5129 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:31:34.655649    5129 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:31:34.656639    5129 out.go:303] Setting JSON to false
	I0615 10:31:34.671634    5129 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3665,"bootTime":1686846629,"procs":440,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:31:34.671713    5129 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:31:34.676472    5129 out.go:177] * [bridge-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:31:34.687270    5129 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:31:34.683387    5129 notify.go:220] Checking for updates...
	I0615 10:31:34.692315    5129 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:31:34.695350    5129 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:31:34.696687    5129 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:31:34.699345    5129 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:31:34.702370    5129 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:31:34.705675    5129 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:31:34.705710    5129 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:31:34.710274    5129 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:31:34.717305    5129 start.go:297] selected driver: qemu2
	I0615 10:31:34.717310    5129 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:31:34.717315    5129 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:31:34.719208    5129 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:31:34.722333    5129 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:31:34.725439    5129 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:31:34.725461    5129 cni.go:84] Creating CNI manager for "bridge"
	I0615 10:31:34.725472    5129 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:31:34.725478    5129 start_flags.go:319] config:
	{Name:bridge-678000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:31:34.725563    5129 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:31:34.733384    5129 out.go:177] * Starting control plane node bridge-678000 in cluster bridge-678000
	I0615 10:31:34.737338    5129 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:31:34.737360    5129 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:31:34.737375    5129 cache.go:57] Caching tarball of preloaded images
	I0615 10:31:34.737436    5129 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:31:34.737442    5129 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:31:34.737510    5129 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/bridge-678000/config.json ...
	I0615 10:31:34.737522    5129 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/bridge-678000/config.json: {Name:mk1c760c6f7200cde768e986b7d59584789a9bde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:31:34.737727    5129 start.go:365] acquiring machines lock for bridge-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:31:34.737758    5129 start.go:369] acquired machines lock for "bridge-678000" in 24.667µs
	I0615 10:31:34.737767    5129 start.go:93] Provisioning new machine with config: &{Name:bridge-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:31:34.737794    5129 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:31:34.746346    5129 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:31:34.762973    5129 start.go:159] libmachine.API.Create for "bridge-678000" (driver="qemu2")
	I0615 10:31:34.762996    5129 client.go:168] LocalClient.Create starting
	I0615 10:31:34.763050    5129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:31:34.763072    5129 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:34.763081    5129 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:34.763119    5129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:31:34.763133    5129 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:34.763141    5129 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:34.763484    5129 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:31:34.873420    5129 main.go:141] libmachine: Creating SSH key...
	I0615 10:31:34.931069    5129 main.go:141] libmachine: Creating Disk image...
	I0615 10:31:34.931080    5129 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:31:34.931245    5129 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2
	I0615 10:31:34.939699    5129 main.go:141] libmachine: STDOUT: 
	I0615 10:31:34.939712    5129 main.go:141] libmachine: STDERR: 
	I0615 10:31:34.939761    5129 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2 +20000M
	I0615 10:31:34.947034    5129 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:31:34.947045    5129 main.go:141] libmachine: STDERR: 
	I0615 10:31:34.947062    5129 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2
	I0615 10:31:34.947067    5129 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:31:34.947098    5129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:26:24:2d:8c:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2
	I0615 10:31:34.948639    5129 main.go:141] libmachine: STDOUT: 
	I0615 10:31:34.948652    5129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:31:34.948671    5129 client.go:171] LocalClient.Create took 185.671958ms
	I0615 10:31:36.950830    5129 start.go:128] duration metric: createHost completed in 2.213050208s
	I0615 10:31:36.950880    5129 start.go:83] releasing machines lock for "bridge-678000", held for 2.213148167s
	W0615 10:31:36.950966    5129 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:36.959266    5129 out.go:177] * Deleting "bridge-678000" in qemu2 ...
	W0615 10:31:36.979203    5129 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:36.979228    5129 start.go:687] Will try again in 5 seconds ...
	I0615 10:31:41.981420    5129 start.go:365] acquiring machines lock for bridge-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:31:41.981926    5129 start.go:369] acquired machines lock for "bridge-678000" in 404.209µs
	I0615 10:31:41.982064    5129 start.go:93] Provisioning new machine with config: &{Name:bridge-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:bridge-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:31:41.982346    5129 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:31:41.988024    5129 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:31:42.035169    5129 start.go:159] libmachine.API.Create for "bridge-678000" (driver="qemu2")
	I0615 10:31:42.035206    5129 client.go:168] LocalClient.Create starting
	I0615 10:31:42.035327    5129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:31:42.035368    5129 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:42.035385    5129 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:42.035468    5129 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:31:42.035506    5129 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:42.035519    5129 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:42.036019    5129 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:31:42.162364    5129 main.go:141] libmachine: Creating SSH key...
	I0615 10:31:42.247878    5129 main.go:141] libmachine: Creating Disk image...
	I0615 10:31:42.247884    5129 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:31:42.248030    5129 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2
	I0615 10:31:42.256607    5129 main.go:141] libmachine: STDOUT: 
	I0615 10:31:42.256622    5129 main.go:141] libmachine: STDERR: 
	I0615 10:31:42.256679    5129 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2 +20000M
	I0615 10:31:42.263857    5129 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:31:42.263890    5129 main.go:141] libmachine: STDERR: 
	I0615 10:31:42.263905    5129 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2
	I0615 10:31:42.263913    5129 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:31:42.263948    5129 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:fc:cf:4c:f2:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/bridge-678000/disk.qcow2
	I0615 10:31:42.265557    5129 main.go:141] libmachine: STDOUT: 
	I0615 10:31:42.265569    5129 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:31:42.265585    5129 client.go:171] LocalClient.Create took 230.378791ms
	I0615 10:31:44.267725    5129 start.go:128] duration metric: createHost completed in 2.28535975s
	I0615 10:31:44.267793    5129 start.go:83] releasing machines lock for "bridge-678000", held for 2.285878541s
	W0615 10:31:44.268333    5129 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:44.277953    5129 out.go:177] 
	W0615 10:31:44.281995    5129 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:31:44.282022    5129 out.go:239] * 
	* 
	W0615 10:31:44.284621    5129 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:31:44.293916    5129 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.68s)
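
The bridge run shows the failure is host-side rather than profile-specific: a fresh profile name, a fresh disk image, and a retry five seconds later all hit the same refusal before the VM ever boots. The natural next step is restarting the daemon; the sketch below assumes the install layout shown in the logs, and both the --vmnet-gateway address (the socket_vmnet README default) and the Homebrew service name are assumptions this report does not verify.

	# Run the daemon in the foreground to surface startup errors
	# (vmnet requires root; the gateway address is an assumed default).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

	# Or restart a Homebrew-managed install (service name assumed):
	sudo brew services restart socket_vmnet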

TestNetworkPlugins/group/kubenet/Start (9.7s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-678000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.69979925s)

-- stdout --
	* [kubenet-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-678000 in cluster kubenet-678000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-678000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:31:46.472849    5238 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:31:46.472972    5238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:31:46.472975    5238 out.go:309] Setting ErrFile to fd 2...
	I0615 10:31:46.472978    5238 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:31:46.473044    5238 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:31:46.474049    5238 out.go:303] Setting JSON to false
	I0615 10:31:46.489269    5238 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3677,"bootTime":1686846629,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:31:46.489346    5238 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:31:46.495043    5238 out.go:177] * [kubenet-678000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:31:46.503019    5238 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:31:46.503079    5238 notify.go:220] Checking for updates...
	I0615 10:31:46.507033    5238 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:31:46.508469    5238 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:31:46.511026    5238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:31:46.514022    5238 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:31:46.517078    5238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:31:46.520321    5238 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:31:46.520360    5238 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:31:46.524953    5238 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:31:46.531989    5238 start.go:297] selected driver: qemu2
	I0615 10:31:46.531994    5238 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:31:46.532000    5238 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:31:46.533885    5238 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:31:46.537001    5238 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:31:46.540138    5238 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:31:46.540159    5238 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0615 10:31:46.540163    5238 start_flags.go:319] config:
	{Name:kubenet-678000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:31:46.540252    5238 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:31:46.548006    5238 out.go:177] * Starting control plane node kubenet-678000 in cluster kubenet-678000
	I0615 10:31:46.550981    5238 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:31:46.551004    5238 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:31:46.551018    5238 cache.go:57] Caching tarball of preloaded images
	I0615 10:31:46.551071    5238 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:31:46.551079    5238 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:31:46.551146    5238 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/kubenet-678000/config.json ...
	I0615 10:31:46.551157    5238 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/kubenet-678000/config.json: {Name:mk62bca67b8a250a586fdc629ba93a9fc0cb804b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:31:46.551361    5238 start.go:365] acquiring machines lock for kubenet-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:31:46.551392    5238 start.go:369] acquired machines lock for "kubenet-678000" in 24.875µs
	I0615 10:31:46.551401    5238 start.go:93] Provisioning new machine with config: &{Name:kubenet-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:31:46.551426    5238 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:31:46.560034    5238 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:31:46.576601    5238 start.go:159] libmachine.API.Create for "kubenet-678000" (driver="qemu2")
	I0615 10:31:46.576628    5238 client.go:168] LocalClient.Create starting
	I0615 10:31:46.576686    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:31:46.576708    5238 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:46.576719    5238 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:46.576763    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:31:46.576787    5238 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:46.576793    5238 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:46.577131    5238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:31:46.681740    5238 main.go:141] libmachine: Creating SSH key...
	I0615 10:31:46.744280    5238 main.go:141] libmachine: Creating Disk image...
	I0615 10:31:46.744290    5238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:31:46.744454    5238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2
	I0615 10:31:46.753027    5238 main.go:141] libmachine: STDOUT: 
	I0615 10:31:46.753041    5238 main.go:141] libmachine: STDERR: 
	I0615 10:31:46.753083    5238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2 +20000M
	I0615 10:31:46.760160    5238 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:31:46.760173    5238 main.go:141] libmachine: STDERR: 
	I0615 10:31:46.760187    5238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2
	I0615 10:31:46.760195    5238 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:31:46.760244    5238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:55:91:92:02:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2
	I0615 10:31:46.761720    5238 main.go:141] libmachine: STDOUT: 
	I0615 10:31:46.761733    5238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:31:46.761751    5238 client.go:171] LocalClient.Create took 185.12125ms
	I0615 10:31:48.763923    5238 start.go:128] duration metric: createHost completed in 2.212500209s
	I0615 10:31:48.764019    5238 start.go:83] releasing machines lock for "kubenet-678000", held for 2.212652833s
	W0615 10:31:48.764081    5238 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:48.776405    5238 out.go:177] * Deleting "kubenet-678000" in qemu2 ...
	W0615 10:31:48.795905    5238 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:48.795939    5238 start.go:687] Will try again in 5 seconds ...
	I0615 10:31:53.798035    5238 start.go:365] acquiring machines lock for kubenet-678000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:31:53.798686    5238 start.go:369] acquired machines lock for "kubenet-678000" in 563.291µs
	I0615 10:31:53.798805    5238 start.go:93] Provisioning new machine with config: &{Name:kubenet-678000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:kubenet-678000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:31:53.799116    5238 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:31:53.808629    5238 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0615 10:31:53.855417    5238 start.go:159] libmachine.API.Create for "kubenet-678000" (driver="qemu2")
	I0615 10:31:53.855459    5238 client.go:168] LocalClient.Create starting
	I0615 10:31:53.855574    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:31:53.855621    5238 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:53.855655    5238 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:53.855752    5238 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:31:53.855779    5238 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:53.855801    5238 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:53.856321    5238 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:31:53.979263    5238 main.go:141] libmachine: Creating SSH key...
	I0615 10:31:54.089301    5238 main.go:141] libmachine: Creating Disk image...
	I0615 10:31:54.089309    5238 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:31:54.089467    5238 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2
	I0615 10:31:54.098220    5238 main.go:141] libmachine: STDOUT: 
	I0615 10:31:54.098234    5238 main.go:141] libmachine: STDERR: 
	I0615 10:31:54.098288    5238 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2 +20000M
	I0615 10:31:54.105345    5238 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:31:54.105363    5238 main.go:141] libmachine: STDERR: 
	I0615 10:31:54.105379    5238 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2
	I0615 10:31:54.105385    5238 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:31:54.105432    5238 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:9b:c3:9f:e3:1c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/kubenet-678000/disk.qcow2
	I0615 10:31:54.106982    5238 main.go:141] libmachine: STDOUT: 
	I0615 10:31:54.106999    5238 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:31:54.107011    5238 client.go:171] LocalClient.Create took 251.55175ms
	I0615 10:31:56.109142    5238 start.go:128] duration metric: createHost completed in 2.310039542s
	I0615 10:31:56.109217    5238 start.go:83] releasing machines lock for "kubenet-678000", held for 2.310544209s
	W0615 10:31:56.109695    5238 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-678000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:31:56.118297    5238 out.go:177] 
	W0615 10:31:56.121402    5238 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:31:56.121427    5238 out.go:239] * 
	* 
	W0615 10:31:56.123974    5238 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:31:56.131236    5238 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.70s)
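
Every qemu2 start in this group dies at the same point: socket_vmnet_client cannot reach the /var/run/socket_vmnet unix socket, so qemu is never launched and LocalClient.Create aborts. A minimal Go probe, assuming nothing beyond the socket path shown in the command lines above, reproduces the refused dial:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path taken verbatim from the failing command lines above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// On this host the dial fails with "connection refused",
			// matching the STDERR lines captured in the log.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

A refused connection means no daemon is listening on that path, so the likely fix is restarting the socket_vmnet service on the CI host rather than anything in minikube itself.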

TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-252000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-252000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.757746709s)

-- stdout --
	* [old-k8s-version-252000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-252000 in cluster old-k8s-version-252000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-252000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:31:58.293626    5349 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:31:58.293760    5349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:31:58.293763    5349 out.go:309] Setting ErrFile to fd 2...
	I0615 10:31:58.293765    5349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:31:58.293837    5349 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:31:58.294903    5349 out.go:303] Setting JSON to false
	I0615 10:31:58.310106    5349 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3689,"bootTime":1686846629,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:31:58.310172    5349 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:31:58.315601    5349 out.go:177] * [old-k8s-version-252000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:31:58.323535    5349 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:31:58.323606    5349 notify.go:220] Checking for updates...
	I0615 10:31:58.326418    5349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:31:58.329474    5349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:31:58.332548    5349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:31:58.333978    5349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:31:58.336502    5349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:31:58.339873    5349 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:31:58.339913    5349 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:31:58.344362    5349 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:31:58.351494    5349 start.go:297] selected driver: qemu2
	I0615 10:31:58.351501    5349 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:31:58.351508    5349 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:31:58.353350    5349 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:31:58.356509    5349 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:31:58.359641    5349 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:31:58.359659    5349 cni.go:84] Creating CNI manager for ""
	I0615 10:31:58.359665    5349 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0615 10:31:58.359668    5349 start_flags.go:319] config:
	{Name:old-k8s-version-252000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-252000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:31:58.359750    5349 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:31:58.366390    5349 out.go:177] * Starting control plane node old-k8s-version-252000 in cluster old-k8s-version-252000
	I0615 10:31:58.370489    5349 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0615 10:31:58.370511    5349 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0615 10:31:58.370527    5349 cache.go:57] Caching tarball of preloaded images
	I0615 10:31:58.370582    5349 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:31:58.370588    5349 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0615 10:31:58.370650    5349 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/old-k8s-version-252000/config.json ...
	I0615 10:31:58.370661    5349 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/old-k8s-version-252000/config.json: {Name:mka7502e7916488fcc81ac283421f9e1802ee19a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:31:58.370860    5349 start.go:365] acquiring machines lock for old-k8s-version-252000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:31:58.370890    5349 start.go:369] acquired machines lock for "old-k8s-version-252000" in 22.791µs
	I0615 10:31:58.370899    5349 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-252000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:31:58.370937    5349 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:31:58.379540    5349 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:31:58.396174    5349 start.go:159] libmachine.API.Create for "old-k8s-version-252000" (driver="qemu2")
	I0615 10:31:58.396201    5349 client.go:168] LocalClient.Create starting
	I0615 10:31:58.396261    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:31:58.396283    5349 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:58.396294    5349 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:58.396349    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:31:58.396366    5349 main.go:141] libmachine: Decoding PEM data...
	I0615 10:31:58.396372    5349 main.go:141] libmachine: Parsing certificate...
	I0615 10:31:58.397001    5349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:31:58.514318    5349 main.go:141] libmachine: Creating SSH key...
	I0615 10:31:58.566484    5349 main.go:141] libmachine: Creating Disk image...
	I0615 10:31:58.566490    5349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:31:58.566631    5349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2
	I0615 10:31:58.575229    5349 main.go:141] libmachine: STDOUT: 
	I0615 10:31:58.575242    5349 main.go:141] libmachine: STDERR: 
	I0615 10:31:58.575290    5349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2 +20000M
	I0615 10:31:58.582375    5349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:31:58.582387    5349 main.go:141] libmachine: STDERR: 
	I0615 10:31:58.582403    5349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2
	I0615 10:31:58.582409    5349 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:31:58.582443    5349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:3e:11:8d:61:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2
	I0615 10:31:58.583953    5349 main.go:141] libmachine: STDOUT: 
	I0615 10:31:58.583968    5349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:31:58.583985    5349 client.go:171] LocalClient.Create took 187.781917ms
	I0615 10:32:00.586110    5349 start.go:128] duration metric: createHost completed in 2.215188125s
	I0615 10:32:00.586167    5349 start.go:83] releasing machines lock for "old-k8s-version-252000", held for 2.215303833s
	W0615 10:32:00.586226    5349 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:00.597112    5349 out.go:177] * Deleting "old-k8s-version-252000" in qemu2 ...
	W0615 10:32:00.616522    5349 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:00.616549    5349 start.go:687] Will try again in 5 seconds ...
	I0615 10:32:05.618663    5349 start.go:365] acquiring machines lock for old-k8s-version-252000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:05.619132    5349 start.go:369] acquired machines lock for "old-k8s-version-252000" in 376.25µs
	I0615 10:32:05.619235    5349 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-252000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:32:05.619521    5349 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:32:05.628141    5349 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:32:05.676288    5349 start.go:159] libmachine.API.Create for "old-k8s-version-252000" (driver="qemu2")
	I0615 10:32:05.676346    5349 client.go:168] LocalClient.Create starting
	I0615 10:32:05.676458    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:32:05.676497    5349 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:05.676522    5349 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:05.676599    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:32:05.676627    5349 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:05.676642    5349 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:05.677164    5349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:32:05.799552    5349 main.go:141] libmachine: Creating SSH key...
	I0615 10:32:05.966267    5349 main.go:141] libmachine: Creating Disk image...
	I0615 10:32:05.966273    5349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:32:05.966446    5349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2
	I0615 10:32:05.975205    5349 main.go:141] libmachine: STDOUT: 
	I0615 10:32:05.975220    5349 main.go:141] libmachine: STDERR: 
	I0615 10:32:05.975299    5349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2 +20000M
	I0615 10:32:05.982486    5349 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:32:05.982496    5349 main.go:141] libmachine: STDERR: 
	I0615 10:32:05.982511    5349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2
	I0615 10:32:05.982519    5349 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:32:05.982559    5349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:25:a2:52:7c:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2
	I0615 10:32:05.984123    5349 main.go:141] libmachine: STDOUT: 
	I0615 10:32:05.984137    5349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:05.984148    5349 client.go:171] LocalClient.Create took 307.80175ms
	I0615 10:32:07.986339    5349 start.go:128] duration metric: createHost completed in 2.366825s
	I0615 10:32:07.986409    5349 start.go:83] releasing machines lock for "old-k8s-version-252000", held for 2.367286709s
	W0615 10:32:07.986815    5349 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-252000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-252000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:07.995233    5349 out.go:177] 
	W0615 10:32:07.999419    5349 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:07.999488    5349 out.go:239] * 
	* 
	W0615 10:32:08.002171    5349 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:32:08.010413    5349 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-252000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000: exit status 7 (65.628209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)
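
FirstStart shows the same two-attempt shape as the network-plugin failures: createHost fails, the half-created profile is deleted, start.go waits 5 seconds and retries once, and the second refusal becomes the GUEST_PROVISION exit (status 80). As a sketch of that shape only, not minikube's actual implementation, the control flow reduces to:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithOneRetry mirrors the flow visible in the log: one attempt,
	// a fixed five-second back-off, a second attempt, then give up.
	func startWithOneRetry(create func() error) error {
		if err := create(); err == nil {
			return nil
		}
		time.Sleep(5 * time.Second)
		return create()
	}

	func main() {
		err := startWithOneRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		if err != nil {
			fmt.Println("both attempts failed:", err) // surfaces as exit status 80
		}
	}

Because the refused socket is an environment fault rather than a transient one, the retry cannot succeed, which is why both attempts fail with byte-identical errors.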

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-252000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-252000 create -f testdata/busybox.yaml: exit status 1 (29.669459ms)

** stderr ** 
	error: context "old-k8s-version-252000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-252000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000: exit status 7 (29.149083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-252000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000: exit status 7 (28.870208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-252000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-252000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-252000 describe deploy/metrics-server -n kube-system: exit status 1 (26.351458ms)

** stderr ** 
	error: context "old-k8s-version-252000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-252000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000: exit status 7 (29.155458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
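
DeployApp and EnableAddonWhileActive are cascade failures: because FirstStart never built the cluster, the kubeconfig has no old-k8s-version-252000 context and every kubectl call exits before touching the cluster. A pre-flight check along these lines (a sketch; it assumes only that kubectl's config get-contexts -o name prints one context name per line) would separate this secondary noise from the primary socket_vmnet fault:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const want = "old-k8s-version-252000"
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		// Scan the newline-separated context names for the expected one.
		for _, ctx := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if ctx == want {
				fmt.Println("context exists; cluster-level subtests are meaningful")
				return
			}
		}
		fmt.Printf("context %q missing: cluster-level subtests will fail as above\n", want)
	}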

TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-252000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-252000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.182758583s)

-- stdout --
	* [old-k8s-version-252000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-252000 in cluster old-k8s-version-252000
	* Restarting existing qemu2 VM for "old-k8s-version-252000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-252000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:32:08.476206    5381 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:08.476288    5381 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:08.476290    5381 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:08.476293    5381 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:08.476376    5381 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:08.477358    5381 out.go:303] Setting JSON to false
	I0615 10:32:08.492688    5381 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3699,"bootTime":1686846629,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:32:08.492755    5381 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:32:08.497272    5381 out.go:177] * [old-k8s-version-252000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:32:08.503175    5381 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:32:08.507157    5381 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:32:08.503243    5381 notify.go:220] Checking for updates...
	I0615 10:32:08.513144    5381 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:32:08.516183    5381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:32:08.517575    5381 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:32:08.521147    5381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:32:08.524467    5381 config.go:182] Loaded profile config "old-k8s-version-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0615 10:32:08.528157    5381 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0615 10:32:08.531256    5381 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:32:08.535177    5381 out.go:177] * Using the qemu2 driver based on existing profile
	I0615 10:32:08.542128    5381 start.go:297] selected driver: qemu2
	I0615 10:32:08.542133    5381 start.go:884] validating driver "qemu2" against &{Name:old-k8s-version-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-252000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:08.542190    5381 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:32:08.544228    5381 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:32:08.544251    5381 cni.go:84] Creating CNI manager for ""
	I0615 10:32:08.544257    5381 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0615 10:32:08.544265    5381 start_flags.go:319] config:
	{Name:old-k8s-version-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-252000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:08.544358    5381 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:08.548209    5381 out.go:177] * Starting control plane node old-k8s-version-252000 in cluster old-k8s-version-252000
	I0615 10:32:08.556153    5381 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0615 10:32:08.556183    5381 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0615 10:32:08.556194    5381 cache.go:57] Caching tarball of preloaded images
	I0615 10:32:08.556244    5381 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:32:08.556249    5381 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0615 10:32:08.556320    5381 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/old-k8s-version-252000/config.json ...
	I0615 10:32:08.556701    5381 start.go:365] acquiring machines lock for old-k8s-version-252000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:08.556728    5381 start.go:369] acquired machines lock for "old-k8s-version-252000" in 21.125µs
	I0615 10:32:08.556737    5381 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:32:08.556741    5381 fix.go:54] fixHost starting: 
	I0615 10:32:08.556865    5381 fix.go:102] recreateIfNeeded on old-k8s-version-252000: state=Stopped err=<nil>
	W0615 10:32:08.556873    5381 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:32:08.560122    5381 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-252000" ...
	I0615 10:32:08.568298    5381 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:25:a2:52:7c:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2
	I0615 10:32:08.570149    5381 main.go:141] libmachine: STDOUT: 
	I0615 10:32:08.570163    5381 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:08.570193    5381 fix.go:56] fixHost completed within 13.451ms
	I0615 10:32:08.570198    5381 start.go:83] releasing machines lock for "old-k8s-version-252000", held for 13.466458ms
	W0615 10:32:08.570204    5381 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:08.570255    5381 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:08.570260    5381 start.go:687] Will try again in 5 seconds ...
	I0615 10:32:13.572268    5381 start.go:365] acquiring machines lock for old-k8s-version-252000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:13.572355    5381 start.go:369] acquired machines lock for "old-k8s-version-252000" in 63.916µs
	I0615 10:32:13.572389    5381 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:32:13.572393    5381 fix.go:54] fixHost starting: 
	I0615 10:32:13.572559    5381 fix.go:102] recreateIfNeeded on old-k8s-version-252000: state=Stopped err=<nil>
	W0615 10:32:13.572566    5381 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:32:13.580491    5381 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-252000" ...
	I0615 10:32:13.587616    5381 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:25:a2:52:7c:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2
	I0615 10:32:13.590323    5381 main.go:141] libmachine: STDOUT: 
	I0615 10:32:13.590344    5381 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:13.590365    5381 fix.go:56] fixHost completed within 17.972125ms
	I0615 10:32:13.590371    5381 start.go:83] releasing machines lock for "old-k8s-version-252000", held for 18.002625ms
	W0615 10:32:13.590432    5381 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-252000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-252000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:13.603565    5381 out.go:177] 
	W0615 10:32:13.611556    5381 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:13.611567    5381 out.go:239] * 
	* 
	W0615 10:32:13.612526    5381 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:32:13.623542    5381 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-252000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000: exit status 7 (29.834042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.21s)
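
Root cause here (and in the other qemu2 driver failures in this run) is the same socket_vmnet outage: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so the QEMU restart never launches. The Go program below is a minimal standalone probe of that socket, not minikube code; the path is taken from the log lines above, and the assumption is that the socket_vmnet daemon should be listening there.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path copied from the failing log lines above.
	const sock = "/var/run/socket_vmnet"

	// socket_vmnet_client essentially performs this dial before handing
	// the connected fd to qemu-system-aarch64; "connection refused" means
	// the socket_vmnet daemon is not running or not listening at this path.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe also reports "connection refused", the failure is on the host side (the socket_vmnet daemon needs to be restarted), not in minikube itself.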

TestStoppedBinaryUpgrade/Upgrade (2.75s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3342493106.exe start -p stopped-upgrade-297000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3342493106.exe start -p stopped-upgrade-297000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3342493106.exe: permission denied (6.836708ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3342493106.exe start -p stopped-upgrade-297000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3342493106.exe start -p stopped-upgrade-297000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3342493106.exe: permission denied (5.103ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3342493106.exe start -p stopped-upgrade-297000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3342493106.exe start -p stopped-upgrade-297000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3342493106.exe: permission denied (6.615917ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3342493106.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.75s)
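
These three identical failures never reach minikube at all: fork/exec reports "permission denied", which is what happens when the legacy binary downloaded into TMPDIR lacks the execute bit. Below is a hedged Go sketch of the obvious guard; the helper name runDownloaded and the overall flow are illustrative, not the actual test helper.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runDownloaded (hypothetical helper) chmods a freshly downloaded binary
// before executing it, avoiding "fork/exec ...: permission denied" on
// files written without the execute bit.
func runDownloaded(path string, args ...string) error {
	if err := os.Chmod(path, 0o755); err != nil {
		return fmt.Errorf("chmod %s: %w", path, err)
	}
	cmd := exec.Command(path, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: runner <binary> [args...]")
		os.Exit(2)
	}
	if err := runDownloaded(os.Args[1], os.Args[2:]...); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}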

TestStoppedBinaryUpgrade/MinikubeLogs (0.13s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-297000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-297000: exit status 85 (132.9875ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-678000 sudo cat                              | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                           |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo                                  | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl status docker --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo                                  | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl cat docker                                   |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo cat                              | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/docker/daemon.json                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo docker                           | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | system info                                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo                                  | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl status cri-docker                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo                                  | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl cat cri-docker                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo cat                              | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo cat                              | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo                                  | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo                                  | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo                                  | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo cat                              | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo cat                              | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo                                  | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo                                  | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo                                  | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo find                             | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-678000 sudo crio                             | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-678000                                       | bridge-678000          | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT | 15 Jun 23 10:31 PDT |
	| start   | -p kubenet-678000                                      | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | --memory=3072                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                        |         |         |                     |                     |
	|         | --network-plugin=kubenet                               |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo cat                             | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/nsswitch.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo cat                             | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/hosts                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo cat                             | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/resolv.conf                                       |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo crictl                          | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | pods                                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo crictl                          | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | ps --all                                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo find                            | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                           |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo ip a s                          | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	| ssh     | -p kubenet-678000 sudo ip r s                          | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | iptables-save                                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | iptables -t nat -L -n -v                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl status kubelet --all                         |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl cat kubelet                                  |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | journalctl -xeu kubelet --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo cat                             | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo cat                             | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl status docker --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl cat docker                                   |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo cat                             | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/docker/daemon.json                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo docker                          | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | system info                                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl status cri-docker                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl cat cri-docker                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo cat                             | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo cat                             | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | cri-dockerd --version                                  |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo cat                             | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo cat                             | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo                                 | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo find                            | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-678000 sudo crio                            | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p kubenet-678000                                      | kubenet-678000         | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT | 15 Jun 23 10:31 PDT |
	| start   | -p old-k8s-version-252000                              | old-k8s-version-252000 | jenkins | v1.30.1 | 15 Jun 23 10:31 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-252000        | old-k8s-version-252000 | jenkins | v1.30.1 | 15 Jun 23 10:32 PDT | 15 Jun 23 10:32 PDT |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-252000                              | old-k8s-version-252000 | jenkins | v1.30.1 | 15 Jun 23 10:32 PDT | 15 Jun 23 10:32 PDT |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-252000             | old-k8s-version-252000 | jenkins | v1.30.1 | 15 Jun 23 10:32 PDT | 15 Jun 23 10:32 PDT |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-252000                              | old-k8s-version-252000 | jenkins | v1.30.1 | 15 Jun 23 10:32 PDT |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=qemu2                                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 10:32:08
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 10:32:08.476206    5381 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:08.476288    5381 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:08.476290    5381 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:08.476293    5381 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:08.476376    5381 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:08.477358    5381 out.go:303] Setting JSON to false
	I0615 10:32:08.492688    5381 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3699,"bootTime":1686846629,"procs":438,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:32:08.492755    5381 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:32:08.497272    5381 out.go:177] * [old-k8s-version-252000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:32:08.503175    5381 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:32:08.507157    5381 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:32:08.503243    5381 notify.go:220] Checking for updates...
	I0615 10:32:08.513144    5381 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:32:08.516183    5381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:32:08.517575    5381 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:32:08.521147    5381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:32:08.524467    5381 config.go:182] Loaded profile config "old-k8s-version-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0615 10:32:08.528157    5381 out.go:177] * Kubernetes 1.27.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.3
	I0615 10:32:08.531256    5381 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:32:08.535177    5381 out.go:177] * Using the qemu2 driver based on existing profile
	I0615 10:32:08.542128    5381 start.go:297] selected driver: qemu2
	I0615 10:32:08.542133    5381 start.go:884] validating driver "qemu2" against &{Name:old-k8s-version-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-252000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:08.542190    5381 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:32:08.544228    5381 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:32:08.544251    5381 cni.go:84] Creating CNI manager for ""
	I0615 10:32:08.544257    5381 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0615 10:32:08.544265    5381 start_flags.go:319] config:
	{Name:old-k8s-version-252000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-252000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:08.544358    5381 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:08.548209    5381 out.go:177] * Starting control plane node old-k8s-version-252000 in cluster old-k8s-version-252000
	I0615 10:32:08.556153    5381 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0615 10:32:08.556183    5381 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0615 10:32:08.556194    5381 cache.go:57] Caching tarball of preloaded images
	I0615 10:32:08.556244    5381 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:32:08.556249    5381 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0615 10:32:08.556320    5381 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/old-k8s-version-252000/config.json ...
	I0615 10:32:08.556701    5381 start.go:365] acquiring machines lock for old-k8s-version-252000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:08.556728    5381 start.go:369] acquired machines lock for "old-k8s-version-252000" in 21.125µs
	I0615 10:32:08.556737    5381 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:32:08.556741    5381 fix.go:54] fixHost starting: 
	I0615 10:32:08.556865    5381 fix.go:102] recreateIfNeeded on old-k8s-version-252000: state=Stopped err=<nil>
	W0615 10:32:08.556873    5381 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:32:08.560122    5381 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-252000" ...
	I0615 10:32:08.568298    5381 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:25:a2:52:7c:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/old-k8s-version-252000/disk.qcow2
	I0615 10:32:08.570149    5381 main.go:141] libmachine: STDOUT: 
	I0615 10:32:08.570163    5381 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:08.570193    5381 fix.go:56] fixHost completed within 13.451ms
	I0615 10:32:08.570198    5381 start.go:83] releasing machines lock for "old-k8s-version-252000", held for 13.466458ms
	W0615 10:32:08.570204    5381 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:08.570255    5381 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:08.570260    5381 start.go:687] Will try again in 5 seconds ...
	
	* 
	* Profile "stopped-upgrade-297000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-297000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.13s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-252000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000: exit status 7 (30.411583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-252000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-252000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-252000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.405917ms)

** stderr ** 
	error: context "old-k8s-version-252000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-252000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000: exit status 7 (28.370625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-252000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-252000 "sudo crictl images -o json": exit status 89 (41.027792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-252000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-252000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-252000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000: exit status 7 (29.707709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
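
Two separate problems surface above: the control plane is down (exit status 89), and the test then hands the human-readable hint to a JSON decoder, producing the secondary "invalid character '*'" error. The Go sketch below shows a more defensive parse; the struct mirrors the usual `crictl images -o json` layout, and both the field names and the json.Valid pre-check are assumptions for illustration.

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImages assumes the usual `crictl images -o json` shape:
// {"images": [{"repoTags": ["..."], ...}, ...]}.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func parseImages(out []byte) ([]string, error) {
	// When the node is stopped, the command prints a hint instead of
	// JSON; report that with context rather than surfacing a bare
	// "invalid character" decode error.
	if !json.Valid(out) {
		return nil, fmt.Errorf("expected JSON from crictl, got: %.60q", out)
	}
	var ci crictlImages
	if err := json.Unmarshal(out, &ci); err != nil {
		return nil, err
	}
	var tags []string
	for _, img := range ci.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	_, err := parseImages([]byte("* The control plane node must be running for this command"))
	fmt.Println(err) // expected JSON from crictl, got: "* The control plane node must be running for..."
}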

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-252000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-252000 --alsologtostderr -v=1: exit status 89 (45.833334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-252000"

-- /stdout --
** stderr ** 
	I0615 10:32:13.850089    5415 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:13.850437    5415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:13.850441    5415 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:13.850443    5415 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:13.850527    5415 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:13.850727    5415 out.go:303] Setting JSON to false
	I0615 10:32:13.850737    5415 mustload.go:65] Loading cluster: old-k8s-version-252000
	I0615 10:32:13.850903    5415 config.go:182] Loaded profile config "old-k8s-version-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0615 10:32:13.854562    5415 out.go:177] * The control plane node must be running for this command
	I0615 10:32:13.862543    5415 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-252000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-252000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000: exit status 7 (29.81125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-252000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000: exit status 7 (29.682208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-252000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-418000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-418000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (9.873760292s)

-- stdout --
	* [embed-certs-418000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-418000 in cluster embed-certs-418000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-418000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:32:13.940412    5423 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:13.940527    5423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:13.940529    5423 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:13.940532    5423 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:13.940632    5423 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:13.942042    5423 out.go:303] Setting JSON to false
	I0615 10:32:13.959169    5423 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3704,"bootTime":1686846629,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:32:13.959228    5423 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:32:13.963591    5423 out.go:177] * [embed-certs-418000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:32:13.968491    5423 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:32:13.968573    5423 notify.go:220] Checking for updates...
	I0615 10:32:13.972520    5423 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:32:13.976508    5423 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:32:13.980562    5423 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:32:13.983537    5423 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:32:13.990495    5423 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:32:13.994835    5423 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:13.994903    5423 config.go:182] Loaded profile config "old-k8s-version-252000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0615 10:32:13.994949    5423 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:32:13.998513    5423 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:32:14.004458    5423 start.go:297] selected driver: qemu2
	I0615 10:32:14.004463    5423 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:32:14.004467    5423 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:32:14.006274    5423 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:32:14.012388    5423 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:32:14.018567    5423 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:32:14.018583    5423 cni.go:84] Creating CNI manager for ""
	I0615 10:32:14.018589    5423 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:32:14.018594    5423 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:32:14.018601    5423 start_flags.go:319] config:
	{Name:embed-certs-418000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-418000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:14.018681    5423 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:14.025522    5423 out.go:177] * Starting control plane node embed-certs-418000 in cluster embed-certs-418000
	I0615 10:32:14.029601    5423 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:32:14.029621    5423 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:32:14.029631    5423 cache.go:57] Caching tarball of preloaded images
	I0615 10:32:14.029700    5423 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:32:14.029705    5423 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:32:14.029760    5423 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/embed-certs-418000/config.json ...
	I0615 10:32:14.029778    5423 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/embed-certs-418000/config.json: {Name:mke6380297d4820c1382d2cecfce98b5c42adc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:32:14.030443    5423 start.go:365] acquiring machines lock for embed-certs-418000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:14.030480    5423 start.go:369] acquired machines lock for "embed-certs-418000" in 30.25µs
	I0615 10:32:14.030490    5423 start.go:93] Provisioning new machine with config: &{Name:embed-certs-418000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-418000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:32:14.030512    5423 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:32:14.044545    5423 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:32:14.059920    5423 start.go:159] libmachine.API.Create for "embed-certs-418000" (driver="qemu2")
	I0615 10:32:14.059955    5423 client.go:168] LocalClient.Create starting
	I0615 10:32:14.060972    5423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:32:14.061000    5423 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:14.061009    5423 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:14.061060    5423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:32:14.061081    5423 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:14.061089    5423 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:14.061421    5423 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:32:14.212704    5423 main.go:141] libmachine: Creating SSH key...
	I0615 10:32:14.336477    5423 main.go:141] libmachine: Creating Disk image...
	I0615 10:32:14.336486    5423 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:32:14.336658    5423 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2
	I0615 10:32:14.346131    5423 main.go:141] libmachine: STDOUT: 
	I0615 10:32:14.346150    5423 main.go:141] libmachine: STDERR: 
	I0615 10:32:14.346210    5423 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2 +20000M
	I0615 10:32:14.354259    5423 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:32:14.354276    5423 main.go:141] libmachine: STDERR: 
	I0615 10:32:14.354295    5423 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2
	I0615 10:32:14.354307    5423 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:32:14.354349    5423 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:f4:e0:ff:c7:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2
	I0615 10:32:14.356010    5423 main.go:141] libmachine: STDOUT: 
	I0615 10:32:14.356029    5423 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:14.356054    5423 client.go:171] LocalClient.Create took 295.918875ms
	I0615 10:32:16.358104    5423 start.go:128] duration metric: createHost completed in 2.327618667s
	I0615 10:32:16.358154    5423 start.go:83] releasing machines lock for "embed-certs-418000", held for 2.327705917s
	W0615 10:32:16.358184    5423 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:16.375670    5423 out.go:177] * Deleting "embed-certs-418000" in qemu2 ...
	W0615 10:32:16.388292    5423 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:16.388302    5423 start.go:687] Will try again in 5 seconds ...
	I0615 10:32:21.388531    5423 start.go:365] acquiring machines lock for embed-certs-418000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:21.389176    5423 start.go:369] acquired machines lock for "embed-certs-418000" in 548.916µs
	I0615 10:32:21.389314    5423 start.go:93] Provisioning new machine with config: &{Name:embed-certs-418000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-418000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mount
MSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:32:21.389713    5423 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:32:21.395289    5423 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:32:21.443762    5423 start.go:159] libmachine.API.Create for "embed-certs-418000" (driver="qemu2")
	I0615 10:32:21.443798    5423 client.go:168] LocalClient.Create starting
	I0615 10:32:21.443949    5423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:32:21.443990    5423 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:21.444006    5423 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:21.444080    5423 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:32:21.444113    5423 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:21.444129    5423 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:21.444612    5423 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:32:21.569113    5423 main.go:141] libmachine: Creating SSH key...
	I0615 10:32:21.722012    5423 main.go:141] libmachine: Creating Disk image...
	I0615 10:32:21.722021    5423 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:32:21.722170    5423 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2
	I0615 10:32:21.730921    5423 main.go:141] libmachine: STDOUT: 
	I0615 10:32:21.730934    5423 main.go:141] libmachine: STDERR: 
	I0615 10:32:21.730981    5423 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2 +20000M
	I0615 10:32:21.738257    5423 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:32:21.738274    5423 main.go:141] libmachine: STDERR: 
	I0615 10:32:21.738287    5423 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2
	I0615 10:32:21.738292    5423 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:32:21.738331    5423 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6a:ba:13:a5:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2
	I0615 10:32:21.739901    5423 main.go:141] libmachine: STDOUT: 
	I0615 10:32:21.739913    5423 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:21.739924    5423 client.go:171] LocalClient.Create took 296.127167ms
	I0615 10:32:23.742068    5423 start.go:128] duration metric: createHost completed in 2.352368875s
	I0615 10:32:23.742110    5423 start.go:83] releasing machines lock for "embed-certs-418000", held for 2.3529485s
	W0615 10:32:23.742497    5423 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-418000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:23.757102    5423 out.go:177] 
	W0615 10:32:23.765254    5423 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:23.765308    5423 out.go:239] * 
	* 
	W0615 10:32:23.767530    5423 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:32:23.776094    5423 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-418000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000: exit status 7 (48.747083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.92s)
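Note: as with the other qemu2 starts in this run, the test never gets past VM creation: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, and both creation attempts abort before the guest boots ("Connection refused" generally means nothing is listening on the socket, which matches a daemon that is not running). A minimal host-side triage sketch, assuming socket_vmnet was installed via Homebrew (the service management below is an assumption, not taken from this log):

	# Does the socket the qemu2 driver dials actually exist on the host?
	ls -l /var/run/socket_vmnet

	# Is the daemon loaded? (Homebrew-managed service assumed.)
	sudo brew services info socket_vmnet

	# (Re)start it before retrying `minikube start --driver=qemu2`.
	sudo brew services start socket_vmnet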

TestStartStop/group/no-preload/serial/FirstStart (11.61s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-084000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-084000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (11.542212125s)

-- stdout --
	* [no-preload-084000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-084000 in cluster no-preload-084000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-084000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:32:14.625994    5462 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:14.626113    5462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:14.626116    5462 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:14.626118    5462 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:14.626189    5462 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:14.627227    5462 out.go:303] Setting JSON to false
	I0615 10:32:14.642409    5462 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3705,"bootTime":1686846629,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:32:14.642490    5462 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:32:14.647438    5462 out.go:177] * [no-preload-084000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:32:14.654351    5462 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:32:14.657297    5462 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:32:14.654389    5462 notify.go:220] Checking for updates...
	I0615 10:32:14.664334    5462 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:32:14.667326    5462 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:32:14.670333    5462 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:32:14.673357    5462 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:32:14.675051    5462 config.go:182] Loaded profile config "embed-certs-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:14.675118    5462 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:14.675157    5462 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:32:14.679429    5462 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:32:14.686129    5462 start.go:297] selected driver: qemu2
	I0615 10:32:14.686134    5462 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:32:14.686141    5462 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:32:14.688245    5462 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:32:14.691318    5462 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:32:14.694471    5462 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:32:14.694502    5462 cni.go:84] Creating CNI manager for ""
	I0615 10:32:14.694511    5462 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:32:14.694515    5462 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:32:14.694521    5462 start_flags.go:319] config:
	{Name:no-preload-084000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-084000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:14.694632    5462 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:14.698324    5462 out.go:177] * Starting control plane node no-preload-084000 in cluster no-preload-084000
	I0615 10:32:14.706343    5462 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:32:14.706418    5462 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/no-preload-084000/config.json ...
	I0615 10:32:14.706435    5462 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/no-preload-084000/config.json: {Name:mkfe28bf789d2e5d96f2c4490041688f3a9f45f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:32:14.706465    5462 cache.go:107] acquiring lock: {Name:mkb251ff5edae426ab2aa5dafd3340c322e8c0bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:14.706475    5462 cache.go:107] acquiring lock: {Name:mk52793308e31a1ac240f8367a2974680dce35df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:14.706489    5462 cache.go:107] acquiring lock: {Name:mka275704b25bc177472128b3283fdc554917910 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:14.706539    5462 cache.go:107] acquiring lock: {Name:mk3ffc47e995356708f0d33f4de28536c25fc4c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:14.706637    5462 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.3
	I0615 10:32:14.706678    5462 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0615 10:32:14.706658    5462 cache.go:107] acquiring lock: {Name:mk3cbac49c2f017a1e53f647a30c359555de62f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:14.706699    5462 start.go:365] acquiring machines lock for no-preload-084000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:14.706681    5462 cache.go:107] acquiring lock: {Name:mk099f7da13518f52997fad5c1b1c7501fe20270 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:14.706696    5462 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0615 10:32:14.706777    5462 cache.go:107] acquiring lock: {Name:mk7ff8a53fc8db6452ecd864043d5edd7eb8a342 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:14.706778    5462 cache.go:115] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0615 10:32:14.706791    5462 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 330µs
	I0615 10:32:14.706800    5462 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0615 10:32:14.706703    5462 cache.go:107] acquiring lock: {Name:mkb5fd9c53c6e84be4b85c78a1ee29c50a631291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:14.706842    5462 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0615 10:32:14.706895    5462 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.3
	I0615 10:32:14.706955    5462 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.3
	I0615 10:32:14.707040    5462 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0615 10:32:14.712823    5462 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.3
	I0615 10:32:14.716204    5462 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0615 10:32:14.716268    5462 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.3
	I0615 10:32:14.716293    5462 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.3
	I0615 10:32:14.716402    5462 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.3: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.3
	I0615 10:32:14.717022    5462 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0615 10:32:14.717081    5462 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0615 10:32:15.896414    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3
	I0615 10:32:15.915489    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0615 10:32:15.959318    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3
	I0615 10:32:16.120841    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3
	I0615 10:32:16.193142    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3
	I0615 10:32:16.338637    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0615 10:32:16.358226    5462 start.go:369] acquired machines lock for "no-preload-084000" in 1.651541167s
	I0615 10:32:16.358266    5462 start.go:93] Provisioning new machine with config: &{Name:no-preload-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-084000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:32:16.358382    5462 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:32:16.367694    5462 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:32:16.396055    5462 start.go:159] libmachine.API.Create for "no-preload-084000" (driver="qemu2")
	I0615 10:32:16.396081    5462 client.go:168] LocalClient.Create starting
	I0615 10:32:16.396152    5462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:32:16.396183    5462 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:16.396199    5462 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:16.396252    5462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:32:16.396274    5462 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:16.396285    5462 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:16.396685    5462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:32:16.486153    5462 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0615 10:32:16.486172    5462 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.779661666s
	I0615 10:32:16.486178    5462 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0615 10:32:16.517494    5462 main.go:141] libmachine: Creating SSH key...
	I0615 10:32:16.569572    5462 cache.go:162] opening:  /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0615 10:32:16.583827    5462 main.go:141] libmachine: Creating Disk image...
	I0615 10:32:16.583835    5462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:32:16.583976    5462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2
	I0615 10:32:16.593031    5462 main.go:141] libmachine: STDOUT: 
	I0615 10:32:16.593044    5462 main.go:141] libmachine: STDERR: 
	I0615 10:32:16.593088    5462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2 +20000M
	I0615 10:32:16.600442    5462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:32:16.600453    5462 main.go:141] libmachine: STDERR: 
	I0615 10:32:16.600469    5462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2
	I0615 10:32:16.600476    5462 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:32:16.600509    5462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:80:f4:b7:26:1a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2
	I0615 10:32:16.602071    5462 main.go:141] libmachine: STDOUT: 
	I0615 10:32:16.602083    5462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:16.602101    5462 client.go:171] LocalClient.Create took 206.016625ms
	I0615 10:32:18.545154    5462 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0615 10:32:18.545220    5462 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 3.838674834s
	I0615 10:32:18.545269    5462 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0615 10:32:18.602335    5462 start.go:128] duration metric: createHost completed in 2.243954875s
	I0615 10:32:18.602372    5462 start.go:83] releasing machines lock for "no-preload-084000", held for 2.244159375s
	W0615 10:32:18.602439    5462 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:18.610562    5462 out.go:177] * Deleting "no-preload-084000" in qemu2 ...
	W0615 10:32:18.630231    5462 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:18.630270    5462 start.go:687] Will try again in 5 seconds ...
	I0615 10:32:19.369940    5462 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0615 10:32:19.370019    5462 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3" took 4.663404625s
	I0615 10:32:19.370046    5462 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0615 10:32:19.916798    5462 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0615 10:32:19.916875    5462 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3" took 5.210251417s
	I0615 10:32:19.916910    5462 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0615 10:32:19.938640    5462 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0615 10:32:19.938684    5462 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3" took 5.23229625s
	I0615 10:32:19.938708    5462 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0615 10:32:20.174450    5462 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0615 10:32:20.174487    5462 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3" took 5.468103917s
	I0615 10:32:20.174511    5462 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0615 10:32:23.630388    5462 start.go:365] acquiring machines lock for no-preload-084000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:23.742230    5462 start.go:369] acquired machines lock for "no-preload-084000" in 111.744208ms
	I0615 10:32:23.742389    5462 start.go:93] Provisioning new machine with config: &{Name:no-preload-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-084000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:32:23.742652    5462 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:32:23.753080    5462 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:32:23.798529    5462 start.go:159] libmachine.API.Create for "no-preload-084000" (driver="qemu2")
	I0615 10:32:23.798571    5462 client.go:168] LocalClient.Create starting
	I0615 10:32:23.798693    5462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:32:23.798734    5462 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:23.798753    5462 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:23.798849    5462 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:32:23.798884    5462 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:23.798897    5462 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:23.799466    5462 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:32:23.932877    5462 main.go:141] libmachine: Creating SSH key...
	I0615 10:32:24.069760    5462 main.go:141] libmachine: Creating Disk image...
	I0615 10:32:24.069768    5462 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:32:24.069909    5462 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2
	I0615 10:32:24.082262    5462 main.go:141] libmachine: STDOUT: 
	I0615 10:32:24.082280    5462 main.go:141] libmachine: STDERR: 
	I0615 10:32:24.082333    5462 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2 +20000M
	I0615 10:32:24.091824    5462 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:32:24.091840    5462 main.go:141] libmachine: STDERR: 
	I0615 10:32:24.091855    5462 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2
	I0615 10:32:24.091865    5462 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:32:24.091904    5462 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:14:e8:b8:55:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2
	I0615 10:32:24.093698    5462 main.go:141] libmachine: STDOUT: 
	I0615 10:32:24.093713    5462 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:24.093723    5462 client.go:171] LocalClient.Create took 295.152167ms
	I0615 10:32:25.609886    5462 cache.go:157] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0615 10:32:25.609964    5462 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 10.903519666s
	I0615 10:32:25.609997    5462 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0615 10:32:25.610040    5462 cache.go:87] Successfully saved all images to host disk.
	I0615 10:32:26.095946    5462 start.go:128] duration metric: createHost completed in 2.353294042s
	I0615 10:32:26.095986    5462 start.go:83] releasing machines lock for "no-preload-084000", held for 2.353766333s
	W0615 10:32:26.096185    5462 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-084000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-084000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:26.108643    5462 out.go:177] 
	W0615 10:32:26.113703    5462 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:26.113734    5462 out.go:239] * 
	* 
	W0615 10:32:26.116199    5462 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:32:26.124619    5462 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-084000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000: exit status 7 (59.48825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (11.61s)
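Every qemu2 start in this group fails at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never gets its network device and provisioning aborts with GUEST_PROVISION. Connection refused on a unix socket means nothing is listening, i.e. the socket_vmnet daemon is down on the build agent. A minimal preflight sketch that would separate "daemon not running" from a real minikube regression (hypothetical helper, not part of the suite; the socket path matches SocketVMnetPath in the profile config dumps below):

    // socketcheck probes the socket_vmnet control socket before the suite runs.
    // Hypothetical preflight helper; the path matches SocketVMnetPath in the
    // profile config dumps in this report.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // "connection refused" here reproduces the failure mode above:
            // no daemon is listening on the socket.
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If this probe fails on the agent, every --driver=qemu2 test below is expected to fail identically, independent of the Kubernetes version under test.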

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-418000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-418000 create -f testdata/busybox.yaml: exit status 1 (30.629375ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-418000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-418000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000: exit status 7 (31.916166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-418000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000: exit status 7 (32.291709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
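The DeployApp failure is a downstream symptom rather than a new bug: FirstStart never created the cluster, so kubeconfig has no "embed-certs-418000" context and every kubectl --context invocation exits 1 before reaching any API server. A hedged sketch of a guard a harness could use to skip such dependent steps (hypothetical helper; it relies only on `kubectl config get-contexts -o name`, which prints one context name per line):

    // contextExists reports whether kubeconfig defines the named context.
    // Hypothetical guard for dependent test steps; not part of the suite.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func contextExists(name string) (bool, error) {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := contextExists("embed-certs-418000")
        fmt.Println(ok, err) // on this agent: false <nil>, matching the error above
    }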

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-418000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-418000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-418000 describe deploy/metrics-server -n kube-system: exit status 1 (26.078542ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-418000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-418000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000: exit status 7 (28.820166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.22s)
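The expected string in this failure is assembled from the two CLI overrides: --registries supplies the prefix (fake.domain) and --images the repository and tag, giving "fake.domain/registry.k8s.io/echoserver:1.4". Since the cluster never existed, the describe output is empty and the substring check fails against empty deployment info. The composition, as an illustrative sketch (not minikube's actual code):

    package main

    import "fmt"

    // expectedAddonImage rebuilds the reference the test greps for: the
    // --registries override is prefixed onto the --images override.
    // Illustrative sketch, not minikube's actual implementation.
    func expectedAddonImage(registry, image string) string {
        if registry == "" {
            return image
        }
        return registry + "/" + image
    }

    func main() {
        fmt.Println(expectedAddonImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
        // fake.domain/registry.k8s.io/echoserver:1.4
    }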

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (6.96s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-418000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-418000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (6.89238125s)

                                                
                                                
-- stdout --
	* [embed-certs-418000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-418000 in cluster embed-certs-418000
	* Restarting existing qemu2 VM for "embed-certs-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-418000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:32:24.321597    5597 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:24.321714    5597 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:24.321717    5597 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:24.321720    5597 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:24.321790    5597 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:24.322765    5597 out.go:303] Setting JSON to false
	I0615 10:32:24.337869    5597 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3715,"bootTime":1686846629,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:32:24.337946    5597 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:32:24.343036    5597 out.go:177] * [embed-certs-418000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:32:24.353980    5597 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:32:24.349998    5597 notify.go:220] Checking for updates...
	I0615 10:32:24.360945    5597 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:32:24.364011    5597 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:32:24.367005    5597 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:32:24.371976    5597 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:32:24.379827    5597 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:32:24.384248    5597 config.go:182] Loaded profile config "embed-certs-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:24.384506    5597 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:32:24.387981    5597 out.go:177] * Using the qemu2 driver based on existing profile
	I0615 10:32:24.394959    5597 start.go:297] selected driver: qemu2
	I0615 10:32:24.394964    5597 start.go:884] validating driver "qemu2" against &{Name:embed-certs-418000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-418000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:24.395014    5597 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:32:24.396916    5597 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:32:24.396947    5597 cni.go:84] Creating CNI manager for ""
	I0615 10:32:24.396953    5597 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:32:24.396958    5597 start_flags.go:319] config:
	{Name:embed-certs-418000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:embed-certs-418000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:24.397047    5597 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:24.404978    5597 out.go:177] * Starting control plane node embed-certs-418000 in cluster embed-certs-418000
	I0615 10:32:24.409031    5597 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:32:24.409057    5597 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:32:24.409068    5597 cache.go:57] Caching tarball of preloaded images
	I0615 10:32:24.409122    5597 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:32:24.409128    5597 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:32:24.409184    5597 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/embed-certs-418000/config.json ...
	I0615 10:32:24.409435    5597 start.go:365] acquiring machines lock for embed-certs-418000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:26.096140    5597 start.go:369] acquired machines lock for "embed-certs-418000" in 1.686706917s
	I0615 10:32:26.096304    5597 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:32:26.096324    5597 fix.go:54] fixHost starting: 
	I0615 10:32:26.097018    5597 fix.go:102] recreateIfNeeded on embed-certs-418000: state=Stopped err=<nil>
	W0615 10:32:26.097056    5597 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:32:26.108640    5597 out.go:177] * Restarting existing qemu2 VM for "embed-certs-418000" ...
	I0615 10:32:26.115271    5597 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6a:ba:13:a5:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2
	I0615 10:32:26.124362    5597 main.go:141] libmachine: STDOUT: 
	I0615 10:32:26.124416    5597 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:26.124535    5597 fix.go:56] fixHost completed within 28.208583ms
	I0615 10:32:26.124553    5597 start.go:83] releasing machines lock for "embed-certs-418000", held for 28.366209ms
	W0615 10:32:26.124588    5597 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:26.124719    5597 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:26.124734    5597 start.go:687] Will try again in 5 seconds ...
	I0615 10:32:31.126858    5597 start.go:365] acquiring machines lock for embed-certs-418000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:31.127482    5597 start.go:369] acquired machines lock for "embed-certs-418000" in 463.208µs
	I0615 10:32:31.127670    5597 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:32:31.127696    5597 fix.go:54] fixHost starting: 
	I0615 10:32:31.128473    5597 fix.go:102] recreateIfNeeded on embed-certs-418000: state=Stopped err=<nil>
	W0615 10:32:31.128498    5597 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:32:31.133873    5597 out.go:177] * Restarting existing qemu2 VM for "embed-certs-418000" ...
	I0615 10:32:31.140159    5597 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:6a:ba:13:a5:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/embed-certs-418000/disk.qcow2
	I0615 10:32:31.149208    5597 main.go:141] libmachine: STDOUT: 
	I0615 10:32:31.149273    5597 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:31.149387    5597 fix.go:56] fixHost completed within 21.696959ms
	I0615 10:32:31.149411    5597 start.go:83] releasing machines lock for "embed-certs-418000", held for 21.88225ms
	W0615 10:32:31.149670    5597 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-418000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:31.157943    5597 out.go:177] 
	W0615 10:32:31.161095    5597 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:31.161147    5597 out.go:239] * 
	* 
	W0615 10:32:31.164077    5597 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:32:31.175036    5597 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-418000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000: exit status 7 (67.58775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (6.96s)
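The SecondStart trace makes the start path's retry policy visible: fixHost fails within ~28ms, start.go logs "Will try again in 5 seconds", sleeps, retries exactly once, and then exits with GUEST_PROVISION. Because the missing socket_vmnet daemon does not recover on its own, the single fixed-delay retry cannot help here. The shape of that logic, as a hedged sketch (the 5-second delay and the messages are taken from the log; the code itself is illustrative, not minikube's implementation):

    package main

    import (
        "fmt"
        "log"
        "time"
    )

    // startWithRetry mirrors the pattern in the trace above: one fixed-delay
    // retry of the host start, then the error is surfaced to the user.
    func startWithRetry(start func() error) error {
        if err := start(); err != nil {
            log.Printf("StartHost failed, but will try again: %v", err)
            time.Sleep(5 * time.Second)
            if err := start(); err != nil {
                return fmt.Errorf("error provisioning guest: %w", err)
            }
        }
        return nil
    }

    func main() {
        err := startWithRetry(func() error {
            return fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
        })
        fmt.Println(err)
    }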

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-084000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-084000 create -f testdata/busybox.yaml: exit status 1 (29.807084ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-084000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-084000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000: exit status 7 (28.308416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-084000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000: exit status 7 (28.323417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-084000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-084000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-084000 describe deploy/metrics-server -n kube-system: exit status 1 (26.264709ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-084000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-084000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000: exit status 7 (27.942292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-084000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-084000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.158140459s)

                                                
                                                
-- stdout --
	* [no-preload-084000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-084000 in cluster no-preload-084000
	* Restarting existing qemu2 VM for "no-preload-084000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-084000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:32:26.586801    5622 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:26.586933    5622 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:26.586936    5622 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:26.586938    5622 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:26.587005    5622 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:26.587965    5622 out.go:303] Setting JSON to false
	I0615 10:32:26.602948    5622 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3717,"bootTime":1686846629,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:32:26.603031    5622 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:32:26.607826    5622 out.go:177] * [no-preload-084000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:32:26.614814    5622 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:32:26.618828    5622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:32:26.614914    5622 notify.go:220] Checking for updates...
	I0615 10:32:26.621834    5622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:32:26.624985    5622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:32:26.627793    5622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:32:26.630814    5622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:32:26.634067    5622 config.go:182] Loaded profile config "no-preload-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:26.634330    5622 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:32:26.638753    5622 out.go:177] * Using the qemu2 driver based on existing profile
	I0615 10:32:26.644716    5622 start.go:297] selected driver: qemu2
	I0615 10:32:26.644721    5622 start.go:884] validating driver "qemu2" against &{Name:no-preload-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-084000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:26.644776    5622 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:32:26.646668    5622 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:32:26.646694    5622 cni.go:84] Creating CNI manager for ""
	I0615 10:32:26.646700    5622 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:32:26.646705    5622 start_flags.go:319] config:
	{Name:no-preload-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:no-preload-084000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:26.646779    5622 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:26.653813    5622 out.go:177] * Starting control plane node no-preload-084000 in cluster no-preload-084000
	I0615 10:32:26.657811    5622 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:32:26.657901    5622 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/no-preload-084000/config.json ...
	I0615 10:32:26.657945    5622 cache.go:107] acquiring lock: {Name:mka275704b25bc177472128b3283fdc554917910 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:26.657950    5622 cache.go:107] acquiring lock: {Name:mkb5fd9c53c6e84be4b85c78a1ee29c50a631291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:26.657943    5622 cache.go:107] acquiring lock: {Name:mkb251ff5edae426ab2aa5dafd3340c322e8c0bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:26.657978    5622 cache.go:107] acquiring lock: {Name:mk52793308e31a1ac240f8367a2974680dce35df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:26.657978    5622 cache.go:107] acquiring lock: {Name:mk3ffc47e995356708f0d33f4de28536c25fc4c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:26.657984    5622 cache.go:107] acquiring lock: {Name:mk7ff8a53fc8db6452ecd864043d5edd7eb8a342 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:26.657991    5622 cache.go:107] acquiring lock: {Name:mk099f7da13518f52997fad5c1b1c7501fe20270 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:26.658034    5622 cache.go:115] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 exists
	I0615 10:32:26.658039    5622 cache.go:115] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 exists
	I0615 10:32:26.658041    5622 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.3" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3" took 115µs
	I0615 10:32:26.658044    5622 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.3" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3" took 66.209µs
	I0615 10:32:26.658048    5622 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.3 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.3 succeeded
	I0615 10:32:26.658049    5622 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.3 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.3 succeeded
	I0615 10:32:26.658050    5622 cache.go:115] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 exists
	I0615 10:32:26.658055    5622 cache.go:107] acquiring lock: {Name:mk3cbac49c2f017a1e53f647a30c359555de62f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:26.658057    5622 cache.go:96] cache image "registry.k8s.io/etcd:3.5.7-0" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0" took 66.458µs
	I0615 10:32:26.658094    5622 cache.go:115] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 exists
	I0615 10:32:26.658103    5622 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.3" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3" took 164.917µs
	I0615 10:32:26.658082    5622 cache.go:115] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 exists
	I0615 10:32:26.658111    5622 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.3 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.3 succeeded
	I0615 10:32:26.658098    5622 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.7-0 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0 succeeded
	I0615 10:32:26.658150    5622 cache.go:115] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0615 10:32:26.658148    5622 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.3" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3" took 141.334µs
	I0615 10:32:26.658177    5622 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.3 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.3 succeeded
	I0615 10:32:26.658156    5622 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 178.375µs
	I0615 10:32:26.658184    5622 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0615 10:32:26.658158    5622 cache.go:115] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0615 10:32:26.658168    5622 cache.go:115] /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0615 10:32:26.658193    5622 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 138.667µs
	I0615 10:32:26.658198    5622 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0615 10:32:26.658198    5622 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 256.625µs
	I0615 10:32:26.658202    5622 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0615 10:32:26.658207    5622 cache.go:87] Successfully saved all images to host disk.
	I0615 10:32:26.658371    5622 start.go:365] acquiring machines lock for no-preload-084000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:26.658398    5622 start.go:369] acquired machines lock for "no-preload-084000" in 21.583µs
	I0615 10:32:26.658407    5622 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:32:26.658412    5622 fix.go:54] fixHost starting: 
	I0615 10:32:26.658524    5622 fix.go:102] recreateIfNeeded on no-preload-084000: state=Stopped err=<nil>
	W0615 10:32:26.658532    5622 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:32:26.666802    5622 out.go:177] * Restarting existing qemu2 VM for "no-preload-084000" ...
	I0615 10:32:26.670868    5622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:14:e8:b8:55:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2
	I0615 10:32:26.672767    5622 main.go:141] libmachine: STDOUT: 
	I0615 10:32:26.672782    5622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:26.672819    5622 fix.go:56] fixHost completed within 14.407542ms
	I0615 10:32:26.672824    5622 start.go:83] releasing machines lock for "no-preload-084000", held for 14.422125ms
	W0615 10:32:26.672831    5622 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:26.672866    5622 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:26.672871    5622 start.go:687] Will try again in 5 seconds ...
	I0615 10:32:31.674855    5622 start.go:365] acquiring machines lock for no-preload-084000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:31.674946    5622 start.go:369] acquired machines lock for "no-preload-084000" in 67.708µs
	I0615 10:32:31.674986    5622 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:32:31.674990    5622 fix.go:54] fixHost starting: 
	I0615 10:32:31.675122    5622 fix.go:102] recreateIfNeeded on no-preload-084000: state=Stopped err=<nil>
	W0615 10:32:31.675127    5622 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:32:31.678972    5622 out.go:177] * Restarting existing qemu2 VM for "no-preload-084000" ...
	I0615 10:32:31.687058    5622 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:14:e8:b8:55:93 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/no-preload-084000/disk.qcow2
	I0615 10:32:31.688995    5622 main.go:141] libmachine: STDOUT: 
	I0615 10:32:31.689008    5622 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:31.689029    5622 fix.go:56] fixHost completed within 14.038541ms
	I0615 10:32:31.689034    5622 start.go:83] releasing machines lock for "no-preload-084000", held for 14.080833ms
	W0615 10:32:31.689080    5622 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-084000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-084000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:31.696024    5622 out.go:177] 
	W0615 10:32:31.699012    5622 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:31.699022    5622 out.go:239] * 
	* 
	W0615 10:32:31.699526    5622 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:32:31.712997    5622 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-084000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000: exit status 7 (29.113167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.19s)
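One detail in the no-preload runs: with --preload=false the start path skips the preloaded tarball and walks the per-image cache instead. Each cache.go line above takes a per-image lock, finds the tarball already under .minikube/cache/images/arm64/, and records "save ... succeeded" in well under a millisecond, so these are all cache hits; the failure comes later, at VM start. The check-then-save pattern, sketched with hypothetical names:

    package main

    import (
        "fmt"
        "os"
    )

    // ensureCached writes an image tarball only when it is absent, which is
    // why every image in the log completes in microseconds: the files exist.
    // Hypothetical sketch of the check-then-save pattern.
    func ensureCached(dst string, save func(string) error) error {
        if _, err := os.Stat(dst); err == nil {
            return nil // cache hit: tarball already on disk, skip the save
        }
        return save(dst)
    }

    func main() {
        err := ensureCached("/tmp/etcd_3.5.7-0", func(p string) error {
            fmt.Println("would pull and save", p)
            return nil
        })
        fmt.Println(err)
    }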

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-418000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000: exit status 7 (30.61775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
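The "client config: context ... does not exist" wrapper shows where this check dies: before any pod polling, the harness builds a client-go REST config pinned to the profile's context, and kubeconfig validation rejects the unknown name up front, with no API traffic at all. A sketch of how that error typically arises with client-go's standard clientcmd package (assuming k8s.io/client-go is available; the context name is the one from this report):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Resolving a REST config for a context that was never created fails
        // during kubeconfig validation; this is the error wrapped as
        // "client config: ..." in the test output above.
        _, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
            clientcmd.NewDefaultClientConfigLoadingRules(),
            &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-418000"},
        ).ClientConfig()
        fmt.Println(err) // context "embed-certs-418000" does not exist
    }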

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-418000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-418000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-418000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.7585ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-418000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-418000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000: exit status 7 (28.431625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
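
Both AddonExistsAfterStop failures abort before any pod is inspected: the cluster was never recreated after the failed SecondStart, so its kubeconfig context is gone and every kubectl call exits 1. A sketch of the underlying condition using client-go's clientcmd loader (an assumed dependency here; the test itself shells out to kubectl instead):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig the same way kubectl does.
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load()
        if err != nil {
            panic(err)
        }
        // After the failed SecondStart, this context is simply absent.
        if _, ok := cfg.Contexts["embed-certs-418000"]; !ok {
            fmt.Println(`context "embed-certs-418000" does not exist`)
        }
    }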

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-418000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-418000 "sudo crictl images -o json": exit status 89 (37.437958ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-418000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-418000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-418000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000: exit status 7 (27.438292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
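
The "invalid character '*'" decode error above is a knock-on effect: with the host stopped, `minikube ssh` prints the control-plane advisory instead of running crictl, and that advisory (which begins with "*") is then fed to the JSON decoder. A sketch of the decode step, with struct fields following the usual `crictl images -o json` shape (an assumption; the test's own types are not shown in this log):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // criImages mirrors the top-level shape of `crictl images -o json`.
    type criImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        // What the test actually received: advisory text, not JSON.
        out := []byte("* The control plane node must be running for this command\n")
        var imgs criImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            fmt.Println("failed to decode images json:", err)
            // invalid character '*' looking for beginning of value
        }
    }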

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-418000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-418000 --alsologtostderr -v=1: exit status 89 (39.24575ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-418000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:32:31.435680    5640 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:31.435823    5640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:31.435825    5640 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:31.435828    5640 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:31.435897    5640 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:31.436106    5640 out.go:303] Setting JSON to false
	I0615 10:32:31.436114    5640 mustload.go:65] Loading cluster: embed-certs-418000
	I0615 10:32:31.436291    5640 config.go:182] Loaded profile config "embed-certs-418000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:31.440008    5640 out.go:177] * The control plane node must be running for this command
	I0615 10:32:31.444084    5640 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-418000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-418000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000: exit status 7 (27.753709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-418000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000: exit status 7 (27.775708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-418000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-084000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000: exit status 7 (29.505583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-084000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-084000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-084000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.691542ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-084000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-084000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000: exit status 7 (29.631875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-084000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-084000 "sudo crictl images -o json": exit status 89 (45.364375ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-084000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-084000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-084000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000: exit status 7 (29.979708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
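
The "(-want +got)" listing is go-cmp diff output: because no image list could be read at all, every expected v1.27.3 image ends up on the -want side. A reduced sketch of that comparison (assuming github.com/google/go-cmp, which produces this diff style):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.27.3",
            "registry.k8s.io/pause:3.9",
        }
        got := []string{} // nothing listed: the VM never came up
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.27.3 images missing (-want +got):\n%s", diff)
        }
    }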

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (10.017339375s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-832000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-832000 in cluster default-k8s-diff-port-832000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-832000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:32:31.923313    5674 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:31.923430    5674 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:31.923433    5674 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:31.923436    5674 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:31.923500    5674 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:31.924820    5674 out.go:303] Setting JSON to false
	I0615 10:32:31.943050    5674 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3722,"bootTime":1686846629,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:32:31.943118    5674 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:32:31.954034    5674 out.go:177] * [default-k8s-diff-port-832000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:32:31.962085    5674 notify.go:220] Checking for updates...
	I0615 10:32:31.965965    5674 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:32:31.969944    5674 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:32:31.973031    5674 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:32:31.975920    5674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:32:31.978992    5674 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:32:31.982033    5674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:32:31.985232    5674 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:31.985294    5674 config.go:182] Loaded profile config "no-preload-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:31.985328    5674 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:32:31.990994    5674 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:32:31.997004    5674 start.go:297] selected driver: qemu2
	I0615 10:32:31.997010    5674 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:32:31.997021    5674 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:32:31.998955    5674 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 10:32:32.002967    5674 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:32:32.006152    5674 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:32:32.006175    5674 cni.go:84] Creating CNI manager for ""
	I0615 10:32:32.006181    5674 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:32:32.006185    5674 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:32:32.006192    5674 start_flags.go:319] config:
	{Name:default-k8s-diff-port-832000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-832000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:32.006287    5674 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:32.013049    5674 out.go:177] * Starting control plane node default-k8s-diff-port-832000 in cluster default-k8s-diff-port-832000
	I0615 10:32:32.017039    5674 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:32:32.017066    5674 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:32:32.017077    5674 cache.go:57] Caching tarball of preloaded images
	I0615 10:32:32.017135    5674 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:32:32.017140    5674 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:32:32.017218    5674 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/default-k8s-diff-port-832000/config.json ...
	I0615 10:32:32.017234    5674 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/default-k8s-diff-port-832000/config.json: {Name:mk3cbb69210a16f6bb46ab5d6d8e9c8eb46687c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:32:32.017441    5674 start.go:365] acquiring machines lock for default-k8s-diff-port-832000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:32.017467    5674 start.go:369] acquired machines lock for "default-k8s-diff-port-832000" in 17µs
	I0615 10:32:32.017476    5674 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-832000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:32:32.017513    5674 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:32:32.021126    5674 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:32:32.035332    5674 start.go:159] libmachine.API.Create for "default-k8s-diff-port-832000" (driver="qemu2")
	I0615 10:32:32.035364    5674 client.go:168] LocalClient.Create starting
	I0615 10:32:32.035428    5674 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:32:32.035449    5674 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:32.035457    5674 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:32.035504    5674 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:32:32.035518    5674 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:32.035527    5674 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:32.035859    5674 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:32:32.266101    5674 main.go:141] libmachine: Creating SSH key...
	I0615 10:32:32.446580    5674 main.go:141] libmachine: Creating Disk image...
	I0615 10:32:32.446591    5674 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:32:32.446744    5674 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0615 10:32:32.455826    5674 main.go:141] libmachine: STDOUT: 
	I0615 10:32:32.455846    5674 main.go:141] libmachine: STDERR: 
	I0615 10:32:32.455917    5674 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2 +20000M
	I0615 10:32:32.463900    5674 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:32:32.463919    5674 main.go:141] libmachine: STDERR: 
	I0615 10:32:32.463940    5674 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0615 10:32:32.463948    5674 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:32:32.464000    5674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:2f:89:50:d3:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0615 10:32:32.465666    5674 main.go:141] libmachine: STDOUT: 
	I0615 10:32:32.465679    5674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:32.465710    5674 client.go:171] LocalClient.Create took 430.347916ms
	I0615 10:32:34.468006    5674 start.go:128] duration metric: createHost completed in 2.450490458s
	I0615 10:32:34.468082    5674 start.go:83] releasing machines lock for "default-k8s-diff-port-832000", held for 2.450642375s
	W0615 10:32:34.468205    5674 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:34.475281    5674 out.go:177] * Deleting "default-k8s-diff-port-832000" in qemu2 ...
	W0615 10:32:34.498459    5674 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:34.498498    5674 start.go:687] Will try again in 5 seconds ...
	I0615 10:32:39.500721    5674 start.go:365] acquiring machines lock for default-k8s-diff-port-832000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:39.501252    5674 start.go:369] acquired machines lock for "default-k8s-diff-port-832000" in 424.917µs
	I0615 10:32:39.501387    5674 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-832000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:32:39.501728    5674 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:32:39.507362    5674 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:32:39.554230    5674 start.go:159] libmachine.API.Create for "default-k8s-diff-port-832000" (driver="qemu2")
	I0615 10:32:39.554276    5674 client.go:168] LocalClient.Create starting
	I0615 10:32:39.554411    5674 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:32:39.554457    5674 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:39.554479    5674 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:39.554548    5674 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:32:39.554577    5674 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:39.554589    5674 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:39.555102    5674 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:32:39.735019    5674 main.go:141] libmachine: Creating SSH key...
	I0615 10:32:39.844123    5674 main.go:141] libmachine: Creating Disk image...
	I0615 10:32:39.844132    5674 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:32:39.844337    5674 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0615 10:32:39.856103    5674 main.go:141] libmachine: STDOUT: 
	I0615 10:32:39.856120    5674 main.go:141] libmachine: STDERR: 
	I0615 10:32:39.856192    5674 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2 +20000M
	I0615 10:32:39.865162    5674 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:32:39.865183    5674 main.go:141] libmachine: STDERR: 
	I0615 10:32:39.865198    5674 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0615 10:32:39.865214    5674 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:32:39.865252    5674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:33:fd:04:2b:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0615 10:32:39.866989    5674 main.go:141] libmachine: STDOUT: 
	I0615 10:32:39.867005    5674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:39.867017    5674 client.go:171] LocalClient.Create took 312.739375ms
	I0615 10:32:41.869241    5674 start.go:128] duration metric: createHost completed in 2.367526459s
	I0615 10:32:41.869302    5674 start.go:83] releasing machines lock for "default-k8s-diff-port-832000", held for 2.368065416s
	W0615 10:32:41.869742    5674 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-832000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-832000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:41.878468    5674 out.go:177] 
	W0615 10:32:41.882506    5674 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:41.882528    5674 out.go:239] * 
	* 
	W0615 10:32:41.885059    5674 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:32:41.900312    5674 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (61.370917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.08s)
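
Every FirstStart failure in this run bottoms out at the same line: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU never receives its network file descriptor (the `-netdev socket,id=net0,fd=3` in the command above) and the VM create is aborted. The connection step can be reproduced in isolation by dialing the unix socket directly; a sketch (if this dial fails, the socket_vmnet service on the build host is likely down or its socket path differs):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            // Matches the failure mode in this report: connection refused.
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        defer conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }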

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-084000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-084000 --alsologtostderr -v=1: exit status 89 (47.654542ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-084000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:32:31.938813    5676 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:31.938971    5676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:31.938978    5676 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:31.938980    5676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:31.939047    5676 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:31.939258    5676 out.go:303] Setting JSON to false
	I0615 10:32:31.939267    5676 mustload.go:65] Loading cluster: no-preload-084000
	I0615 10:32:31.939439    5676 config.go:182] Loaded profile config "no-preload-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:31.944045    5676 out.go:177] * The control plane node must be running for this command
	I0615 10:32:31.951006    5676 out.go:177]   To start a cluster, run: "minikube start -p no-preload-084000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-084000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000: exit status 7 (32.371667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-084000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000: exit status 7 (30.037583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-084000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
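
The --format={{.Host}} flag used throughout these post-mortems is a plain Go text/template rendered against minikube's status value, which is why the stdout blocks contain only the word "Stopped". A self-contained sketch (the Status struct here is illustrative, not minikube's exact type):

    package main

    import (
        "os"
        "text/template"
    )

    // Status stands in for the value minikube renders status templates against.
    type Status struct {
        Host    string
        Kubelet string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
        _ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"})
        // Output: Stopped
    }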

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (11.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-772000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-772000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (11.733090917s)

                                                
                                                
-- stdout --
	* [newest-cni-772000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-772000 in cluster newest-cni-772000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0615 10:32:32.555651    5707 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:32.555769    5707 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:32.555772    5707 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:32.555774    5707 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:32.555842    5707 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:32.556794    5707 out.go:303] Setting JSON to false
	I0615 10:32:32.572032    5707 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3723,"bootTime":1686846629,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:32:32.572107    5707 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:32:32.576994    5707 out.go:177] * [newest-cni-772000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:32:32.584064    5707 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:32:32.587006    5707 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:32:32.584136    5707 notify.go:220] Checking for updates...
	I0615 10:32:32.593007    5707 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:32:32.595941    5707 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:32:32.599038    5707 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:32:32.602036    5707 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:32:32.605393    5707 config.go:182] Loaded profile config "default-k8s-diff-port-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:32.605466    5707 config.go:182] Loaded profile config "multinode-506000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:32.605508    5707 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:32:32.609971    5707 out.go:177] * Using the qemu2 driver based on user configuration
	I0615 10:32:32.616988    5707 start.go:297] selected driver: qemu2
	I0615 10:32:32.616993    5707 start.go:884] validating driver "qemu2" against <nil>
	I0615 10:32:32.617000    5707 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:32:32.618881    5707 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0615 10:32:32.618902    5707 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0615 10:32:32.626996    5707 out.go:177] * Automatically selected the socket_vmnet network
	I0615 10:32:32.635071    5707 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0615 10:32:32.635088    5707 cni.go:84] Creating CNI manager for ""
	I0615 10:32:32.635094    5707 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:32:32.635099    5707 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0615 10:32:32.635106    5707 start_flags.go:319] config:
	{Name:newest-cni-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-772000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:32.635203    5707 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:32.642884    5707 out.go:177] * Starting control plane node newest-cni-772000 in cluster newest-cni-772000
	I0615 10:32:32.646986    5707 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:32:32.647014    5707 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:32:32.647027    5707 cache.go:57] Caching tarball of preloaded images
	I0615 10:32:32.647094    5707 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:32:32.647104    5707 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:32:32.647165    5707 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/newest-cni-772000/config.json ...
	I0615 10:32:32.647180    5707 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/newest-cni-772000/config.json: {Name:mk0d430b8b46c6fe56b109643f20eb54023c2b9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 10:32:32.647391    5707 start.go:365] acquiring machines lock for newest-cni-772000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:34.468283    5707 start.go:369] acquired machines lock for "newest-cni-772000" in 1.820887333s
	I0615 10:32:34.468396    5707 start.go:93] Provisioning new machine with config: &{Name:newest-cni-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-772000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:32:34.469438    5707 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:32:34.475304    5707 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:32:34.520722    5707 start.go:159] libmachine.API.Create for "newest-cni-772000" (driver="qemu2")
	I0615 10:32:34.520761    5707 client.go:168] LocalClient.Create starting
	I0615 10:32:34.520913    5707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:32:34.520955    5707 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:34.520980    5707 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:34.521068    5707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:32:34.521096    5707 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:34.521119    5707 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:34.521798    5707 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:32:34.639461    5707 main.go:141] libmachine: Creating SSH key...
	I0615 10:32:34.829905    5707 main.go:141] libmachine: Creating Disk image...
	I0615 10:32:34.829913    5707 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:32:34.830070    5707 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2
	I0615 10:32:34.840538    5707 main.go:141] libmachine: STDOUT: 
	I0615 10:32:34.840556    5707 main.go:141] libmachine: STDERR: 
	I0615 10:32:34.840614    5707 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2 +20000M
	I0615 10:32:34.847822    5707 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:32:34.847835    5707 main.go:141] libmachine: STDERR: 
	I0615 10:32:34.847852    5707 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2
	I0615 10:32:34.847859    5707 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:32:34.847893    5707 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:c8:ff:4c:10:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2
	I0615 10:32:34.849369    5707 main.go:141] libmachine: STDOUT: 
	I0615 10:32:34.849380    5707 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:34.849404    5707 client.go:171] LocalClient.Create took 328.643291ms
	I0615 10:32:36.851666    5707 start.go:128] duration metric: createHost completed in 2.382217458s
	I0615 10:32:36.851732    5707 start.go:83] releasing machines lock for "newest-cni-772000", held for 2.383443667s
	W0615 10:32:36.851781    5707 start.go:672] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:36.862988    5707 out.go:177] * Deleting "newest-cni-772000" in qemu2 ...
	W0615 10:32:36.880940    5707 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:36.880967    5707 start.go:687] Will try again in 5 seconds ...
	I0615 10:32:41.883061    5707 start.go:365] acquiring machines lock for newest-cni-772000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:41.883427    5707 start.go:369] acquired machines lock for "newest-cni-772000" in 288.584µs
	I0615 10:32:41.883573    5707 start.go:93] Provisioning new machine with config: &{Name:newest-cni-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-772000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountS
tring:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0615 10:32:41.883833    5707 start.go:125] createHost starting for "" (driver="qemu2")
	I0615 10:32:41.900310    5707 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0615 10:32:41.947005    5707 start.go:159] libmachine.API.Create for "newest-cni-772000" (driver="qemu2")
	I0615 10:32:41.947060    5707 client.go:168] LocalClient.Create starting
	I0615 10:32:41.947157    5707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/ca.pem
	I0615 10:32:41.947217    5707 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:41.947246    5707 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:41.947319    5707 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/16718-868/.minikube/certs/cert.pem
	I0615 10:32:41.947351    5707 main.go:141] libmachine: Decoding PEM data...
	I0615 10:32:41.947366    5707 main.go:141] libmachine: Parsing certificate...
	I0615 10:32:41.947905    5707 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/16718-868/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso...
	I0615 10:32:42.075783    5707 main.go:141] libmachine: Creating SSH key...
	I0615 10:32:42.190989    5707 main.go:141] libmachine: Creating Disk image...
	I0615 10:32:42.190997    5707 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0615 10:32:42.194447    5707 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2
	I0615 10:32:42.205838    5707 main.go:141] libmachine: STDOUT: 
	I0615 10:32:42.205858    5707 main.go:141] libmachine: STDERR: 
	I0615 10:32:42.205915    5707 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2 +20000M
	I0615 10:32:42.218121    5707 main.go:141] libmachine: STDOUT: Image resized.
	
	I0615 10:32:42.218138    5707 main.go:141] libmachine: STDERR: 
	I0615 10:32:42.218154    5707 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2
	I0615 10:32:42.218161    5707 main.go:141] libmachine: Starting QEMU VM...
	I0615 10:32:42.218200    5707 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0c:b1:a5:31:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2
	I0615 10:32:42.220057    5707 main.go:141] libmachine: STDOUT: 
	I0615 10:32:42.220069    5707 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:42.220082    5707 client.go:171] LocalClient.Create took 273.019417ms
	I0615 10:32:44.222334    5707 start.go:128] duration metric: createHost completed in 2.338480625s
	I0615 10:32:44.222411    5707 start.go:83] releasing machines lock for "newest-cni-772000", held for 2.339s
	W0615 10:32:44.222733    5707 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:44.234163    5707 out.go:177] 
	W0615 10:32:44.238355    5707 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:44.238387    5707 out.go:239] * 
	* 
	W0615 10:32:44.241124    5707 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:32:44.249107    5707 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-772000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000: exit status 7 (61.728084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (11.80s)
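Every qemu2 start in this run dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet, so QEMU never receives its network file descriptor (the -netdev socket,id=net0,fd=3 argument in the command line above). A minimal pre-flight probe, sketched below with only the Go standard library and the socket path taken from the log, reproduces the "Connection refused" before any VM work is attempted; it is an illustration, not part of minikube:

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/socket_vmnet" // path logged by libmachine above
    	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		// Same failure mode as the log: "Connection refused" means no
    		// daemon is accepting on the socket (e.g. socket_vmnet never started).
    		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable at %s: %v\n", sock, err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

When a probe like this fails, the remedy is on the host rather than in minikube: (re)start the socket_vmnet daemon before the suite runs.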

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-832000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-832000 create -f testdata/busybox.yaml: exit status 1 (31.556875ms)

** stderr ** 
	error: context "default-k8s-diff-port-832000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-832000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (33.177167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (33.481ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
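kubectl reports that the context does not exist because a profile's context is only written into kubeconfig once provisioning succeeds; the failed FirstStart left nothing to select. The sketch below lists the contexts kubectl would actually see, using client-go's clientcmd loader (an assumption made for illustration; the harness itself only shells out to kubectl):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Loads the same files kubectl would (KUBECONFIG or ~/.kube/config).
    	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
    	if err != nil {
    		panic(err)
    	}
    	for name := range cfg.Contexts {
    		fmt.Println(name) // a never-provisioned profile will not be listed
    	}
    }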

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-832000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-832000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-832000 describe deploy/metrics-server -n kube-system: exit status 1 (26.210458ms)

** stderr ** 
	error: context "default-k8s-diff-port-832000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-832000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (27.872667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.20s)
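The assertion at start_stop_delete_test.go:221 expects the metrics-server deployment to reference the --registries override joined to the --images override, i.e. fake.domain/registry.k8s.io/echoserver:1.4; with no cluster, the describe output is empty and the check fails. A sketch of how that expected reference is assembled (overriddenImage is a hypothetical helper, not minikube code; the inputs are the exact flag values above):

    package main

    import "fmt"

    // overriddenImage mirrors how the expected reference is built from the
    // --registries and --images flags: registry prefix, slash, image ref.
    func overriddenImage(registry, image string) string {
    	return registry + "/" + image
    }

    func main() {
    	// Prints "fake.domain/registry.k8s.io/echoserver:1.4", the substring
    	// the test greps for in `kubectl describe deploy/metrics-server`.
    	fmt.Println(overriddenImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
    }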

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (6.886423166s)

-- stdout --
	* [default-k8s-diff-port-832000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-832000 in cluster default-k8s-diff-port-832000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-832000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-832000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:32:42.447113    5742 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:42.447208    5742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:42.447212    5742 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:42.447214    5742 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:42.447276    5742 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:42.448218    5742 out.go:303] Setting JSON to false
	I0615 10:32:42.463484    5742 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3733,"bootTime":1686846629,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:32:42.463564    5742 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:32:42.468367    5742 out.go:177] * [default-k8s-diff-port-832000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:32:42.475401    5742 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:32:42.475488    5742 notify.go:220] Checking for updates...
	I0615 10:32:42.481318    5742 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:32:42.484431    5742 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:32:42.487304    5742 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:32:42.490364    5742 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:32:42.493378    5742 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:32:42.496681    5742 config.go:182] Loaded profile config "default-k8s-diff-port-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:42.496926    5742 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:32:42.501327    5742 out.go:177] * Using the qemu2 driver based on existing profile
	I0615 10:32:42.508319    5742 start.go:297] selected driver: qemu2
	I0615 10:32:42.508326    5742 start.go:884] validating driver "qemu2" against &{Name:default-k8s-diff-port-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-832000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:42.508411    5742 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:32:42.510301    5742 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0615 10:32:42.510329    5742 cni.go:84] Creating CNI manager for ""
	I0615 10:32:42.510335    5742 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:32:42.510341    5742 start_flags.go:319] config:
	{Name:default-k8s-diff-port-832000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:default-k8s-diff-port-8320
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:42.510426    5742 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:42.517475    5742 out.go:177] * Starting control plane node default-k8s-diff-port-832000 in cluster default-k8s-diff-port-832000
	I0615 10:32:42.521323    5742 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:32:42.521346    5742 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:32:42.521360    5742 cache.go:57] Caching tarball of preloaded images
	I0615 10:32:42.521417    5742 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:32:42.521422    5742 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:32:42.521485    5742 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/default-k8s-diff-port-832000/config.json ...
	I0615 10:32:42.521853    5742 start.go:365] acquiring machines lock for default-k8s-diff-port-832000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:44.222559    5742 start.go:369] acquired machines lock for "default-k8s-diff-port-832000" in 1.700669541s
	I0615 10:32:44.222773    5742 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:32:44.222803    5742 fix.go:54] fixHost starting: 
	I0615 10:32:44.223532    5742 fix.go:102] recreateIfNeeded on default-k8s-diff-port-832000: state=Stopped err=<nil>
	W0615 10:32:44.223574    5742 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:32:44.234162    5742 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-832000" ...
	I0615 10:32:44.239801    5742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:33:fd:04:2b:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0615 10:32:44.249011    5742 main.go:141] libmachine: STDOUT: 
	I0615 10:32:44.249080    5742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:44.249184    5742 fix.go:56] fixHost completed within 26.391917ms
	I0615 10:32:44.249209    5742 start.go:83] releasing machines lock for "default-k8s-diff-port-832000", held for 26.612458ms
	W0615 10:32:44.249238    5742 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:44.249453    5742 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:44.249468    5742 start.go:687] Will try again in 5 seconds ...
	I0615 10:32:49.251639    5742 start.go:365] acquiring machines lock for default-k8s-diff-port-832000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:49.252248    5742 start.go:369] acquired machines lock for "default-k8s-diff-port-832000" in 470.666µs
	I0615 10:32:49.252439    5742 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:32:49.252485    5742 fix.go:54] fixHost starting: 
	I0615 10:32:49.253340    5742 fix.go:102] recreateIfNeeded on default-k8s-diff-port-832000: state=Stopped err=<nil>
	W0615 10:32:49.253366    5742 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:32:49.258299    5742 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-832000" ...
	I0615 10:32:49.264388    5742 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:33:fd:04:2b:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/default-k8s-diff-port-832000/disk.qcow2
	I0615 10:32:49.273463    5742 main.go:141] libmachine: STDOUT: 
	I0615 10:32:49.273524    5742 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:49.273629    5742 fix.go:56] fixHost completed within 21.170167ms
	I0615 10:32:49.273653    5742 start.go:83] releasing machines lock for "default-k8s-diff-port-832000", held for 21.361917ms
	W0615 10:32:49.273894    5742 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-832000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-832000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:49.281095    5742 out.go:177] 
	W0615 10:32:49.285342    5742 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:49.285365    5742 out.go:239] * 
	* 
	W0615 10:32:49.287756    5742 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:32:49.295196    5742 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (68.731ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.96s)
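The SecondStart flow is a fixed two-attempt pattern, visible in the timestamps above: fixHost finds state=Stopped, the restart hits the socket error, start.go waits 5 seconds, retries once, then exits with GUEST_PROVISION and status 80. A stub sketch of that control flow (startHost is a hypothetical stand-in for the driver start; here it always fails, exactly as in this run):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    var errRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

    // startHost stands in for the qemu2 driver start (hypothetical stub).
    func startHost() error { return errRefused }

    func main() {
    	err := startHost()
    	if err == nil {
    		return
    	}
    	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
    	time.Sleep(5 * time.Second)
    	if err := startHost(); err != nil {
    		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
    		os.Exit(80) // the exit status the tests observe
    	}
    }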

TestStartStop/group/newest-cni/serial/SecondStart (5.19s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-772000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-772000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3: exit status 80 (5.158345084s)

-- stdout --
	* [newest-cni-772000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-772000 in cluster newest-cni-772000
	* Restarting existing qemu2 VM for "newest-cni-772000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-772000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0615 10:32:44.560020    5758 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:44.560144    5758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:44.560147    5758 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:44.560149    5758 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:44.560212    5758 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:44.561227    5758 out.go:303] Setting JSON to false
	I0615 10:32:44.576332    5758 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3735,"bootTime":1686846629,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:32:44.576406    5758 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:32:44.584488    5758 out.go:177] * [newest-cni-772000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:32:44.588471    5758 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:32:44.588533    5758 notify.go:220] Checking for updates...
	I0615 10:32:44.595432    5758 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:32:44.598507    5758 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:32:44.601450    5758 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:32:44.604491    5758 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:32:44.607473    5758 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:32:44.608960    5758 config.go:182] Loaded profile config "newest-cni-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:44.609211    5758 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:32:44.613449    5758 out.go:177] * Using the qemu2 driver based on existing profile
	I0615 10:32:44.620304    5758 start.go:297] selected driver: qemu2
	I0615 10:32:44.620308    5758 start.go:884] validating driver "qemu2" against &{Name:newest-cni-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-772000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<n
il> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:44.620366    5758 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:32:44.622260    5758 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0615 10:32:44.622284    5758 cni.go:84] Creating CNI manager for ""
	I0615 10:32:44.622290    5758 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 10:32:44.622297    5758 start_flags.go:319] config:
	{Name:newest-cni-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:newest-cni-772000 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeReques
ted:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:32:44.622369    5758 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 10:32:44.629440    5758 out.go:177] * Starting control plane node newest-cni-772000 in cluster newest-cni-772000
	I0615 10:32:44.633477    5758 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 10:32:44.633494    5758 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 10:32:44.633510    5758 cache.go:57] Caching tarball of preloaded images
	I0615 10:32:44.633557    5758 preload.go:174] Found /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0615 10:32:44.633563    5758 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 10:32:44.633625    5758 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/newest-cni-772000/config.json ...
	I0615 10:32:44.633981    5758 start.go:365] acquiring machines lock for newest-cni-772000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:44.634010    5758 start.go:369] acquired machines lock for "newest-cni-772000" in 23.209µs
	I0615 10:32:44.634019    5758 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:32:44.634024    5758 fix.go:54] fixHost starting: 
	I0615 10:32:44.634144    5758 fix.go:102] recreateIfNeeded on newest-cni-772000: state=Stopped err=<nil>
	W0615 10:32:44.634153    5758 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:32:44.641375    5758 out.go:177] * Restarting existing qemu2 VM for "newest-cni-772000" ...
	I0615 10:32:44.645490    5758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0c:b1:a5:31:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2
	I0615 10:32:44.647382    5758 main.go:141] libmachine: STDOUT: 
	I0615 10:32:44.647400    5758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:44.647425    5758 fix.go:56] fixHost completed within 13.402875ms
	I0615 10:32:44.647430    5758 start.go:83] releasing machines lock for "newest-cni-772000", held for 13.416791ms
	W0615 10:32:44.647437    5758 start.go:672] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:44.647475    5758 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:44.647488    5758 start.go:687] Will try again in 5 seconds ...
	I0615 10:32:49.648234    5758 start.go:365] acquiring machines lock for newest-cni-772000: {Name:mk0e9b60c886194f1f41d95b28b5b2644eaf9432 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0615 10:32:49.648363    5758 start.go:369] acquired machines lock for "newest-cni-772000" in 73.708µs
	I0615 10:32:49.648382    5758 start.go:96] Skipping create...Using existing machine configuration
	I0615 10:32:49.648385    5758 fix.go:54] fixHost starting: 
	I0615 10:32:49.648514    5758 fix.go:102] recreateIfNeeded on newest-cni-772000: state=Stopped err=<nil>
	W0615 10:32:49.648518    5758 fix.go:128] unexpected machine state, will restart: <nil>
	I0615 10:32:49.652658    5758 out.go:177] * Restarting existing qemu2 VM for "newest-cni-772000" ...
	I0615 10:32:49.659780    5758 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:0c:b1:a5:31:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/16718-868/.minikube/machines/newest-cni-772000/disk.qcow2
	I0615 10:32:49.661628    5758 main.go:141] libmachine: STDOUT: 
	I0615 10:32:49.661644    5758 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0615 10:32:49.661664    5758 fix.go:56] fixHost completed within 13.278625ms
	I0615 10:32:49.661669    5758 start.go:83] releasing machines lock for "newest-cni-772000", held for 13.302375ms
	W0615 10:32:49.661729    5758 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-772000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-772000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0615 10:32:49.668666    5758 out.go:177] 
	W0615 10:32:49.671722    5758 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0615 10:32:49.671729    5758 out.go:239] * 
	* 
	W0615 10:32:49.672246    5758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 10:32:49.687636    5758 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-772000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000: exit status 7 (30.023ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.19s)
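
Note: every qemu2 start failure in this run reports the same root cause. The qemu2 driver attaches the guest NIC through socket_vmnet, and /opt/socket_vmnet/bin/socket_vmnet_client could not reach the daemon's socket at /var/run/socket_vmnet ("Connection refused"), so the VM never boots and each follow-on subtest fails against a stopped host. A minimal diagnostic sketch for the CI host, assuming socket_vmnet runs as a launchd service (the checks below are assumptions, not taken from this log):

	# Hypothetical host-side checks; run on the Mac itself, not inside minikube.
	ls -l /var/run/socket_vmnet                  # the daemon's socket should exist
	sudo launchctl list | grep -i socket_vmnet   # the daemon should be loaded
	# Once the daemon is back, the log's own advice applies:
	out/minikube-darwin-arm64 delete -p newest-cni-772000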

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-832000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (30.772459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-832000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-832000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-832000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.322916ms)

** stderr ** 
	error: context "default-k8s-diff-port-832000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-832000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (27.954334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-832000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-832000 "sudo crictl images -o json": exit status 89 (40.467958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-832000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-832000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-832000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (28.133625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
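
Note: the "invalid character '*'" decode failure above is secondary. The test pipes the output of "sudo crictl images -o json" into a JSON decoder, but with the host stopped minikube prints its human-readable "control plane node must be running" banner (which begins with "*") instead of JSON, so the decode fails and every expected image lands on the -want side of the diff. A hedged re-check once the profile actually starts (both commands appear verbatim in this report; the same pattern repeats for newest-cni-772000 below):

	out/minikube-darwin-arm64 start -p default-k8s-diff-port-832000
	out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-832000 "sudo crictl images -o json"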

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-832000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-832000 --alsologtostderr -v=1: exit status 89 (42.484209ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-832000"

-- /stdout --
** stderr ** 
	I0615 10:32:49.559431    5776 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:49.559567    5776 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:49.559570    5776 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:49.559572    5776 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:49.559644    5776 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:49.559861    5776 out.go:303] Setting JSON to false
	I0615 10:32:49.559869    5776 mustload.go:65] Loading cluster: default-k8s-diff-port-832000
	I0615 10:32:49.560041    5776 config.go:182] Loaded profile config "default-k8s-diff-port-832000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:49.564678    5776 out.go:177] * The control plane node must be running for this command
	I0615 10:32:49.570702    5776 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-832000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-832000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (27.996708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (27.881916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-832000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
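
Note: pause exits with status 89 for the same secondary reason. mustload.go loads the profile config, finds the control plane not running, and bails out before pausing anything; the post-mortem status calls returning exit status 7 with Host=Stopped merely confirm the VM never came up after the socket_vmnet failure. A sketch for checking the host state in machine-readable form (the --output flag comes from minikube's documented status options, not from this log):

	out/minikube-darwin-arm64 status -p default-k8s-diff-port-832000 --output json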

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-772000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-772000 "sudo crictl images -o json": exit status 89 (40.149458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-772000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-772000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-772000"
start_stop_delete_test.go:304: v1.27.3 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.3",
- 	"registry.k8s.io/kube-controller-manager:v1.27.3",
- 	"registry.k8s.io/kube-proxy:v1.27.3",
- 	"registry.k8s.io/kube-scheduler:v1.27.3",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000: exit status 7 (27.5535ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-772000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-772000 --alsologtostderr -v=1: exit status 89 (45.944958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-772000"

-- /stdout --
** stderr ** 
	I0615 10:32:49.820355    5797 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:32:49.820492    5797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:49.820495    5797 out.go:309] Setting ErrFile to fd 2...
	I0615 10:32:49.820498    5797 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:32:49.820585    5797 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:32:49.820837    5797 out.go:303] Setting JSON to false
	I0615 10:32:49.820890    5797 mustload.go:65] Loading cluster: newest-cni-772000
	I0615 10:32:49.821562    5797 config.go:182] Loaded profile config "newest-cni-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:32:49.824698    5797 out.go:177] * The control plane node must be running for this command
	I0615 10:32:49.832616    5797 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-772000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-772000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000: exit status 7 (29.668208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-772000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000: exit status 7 (28.383417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)


Test pass (138/254)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.27.3/json-events 21.14
11 TestDownloadOnly/v1.27.3/preload-exists 0
14 TestDownloadOnly/v1.27.3/kubectl 0
15 TestDownloadOnly/v1.27.3/LogsDuration 0.08
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
19 TestBinaryMirror 0.33
22 TestAddons/Setup 403.33
31 TestAddons/parallel/Headlamp 12.34
41 TestHyperKitDriverInstallOrUpdate 8.8
44 TestErrorSpam/setup 31.61
45 TestErrorSpam/start 0.33
46 TestErrorSpam/status 0.25
47 TestErrorSpam/pause 0.63
48 TestErrorSpam/unpause 0.58
49 TestErrorSpam/stop 12.25
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 42.75
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 38.04
56 TestFunctional/serial/KubeContext 0.03
57 TestFunctional/serial/KubectlGetPods 0.06
60 TestFunctional/serial/CacheCmd/cache/add_remote 5.86
61 TestFunctional/serial/CacheCmd/cache/add_local 1.34
62 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
63 TestFunctional/serial/CacheCmd/cache/list 0.03
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
65 TestFunctional/serial/CacheCmd/cache/cache_reload 1.28
66 TestFunctional/serial/CacheCmd/cache/delete 0.07
67 TestFunctional/serial/MinikubeKubectlCmd 0.47
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.53
69 TestFunctional/serial/ExtraConfig 37.29
70 TestFunctional/serial/ComponentHealth 0.05
71 TestFunctional/serial/LogsCmd 0.64
72 TestFunctional/serial/LogsFileCmd 0.64
73 TestFunctional/serial/InvalidService 3.95
75 TestFunctional/parallel/ConfigCmd 0.21
76 TestFunctional/parallel/DashboardCmd 9.27
77 TestFunctional/parallel/DryRun 0.21
78 TestFunctional/parallel/InternationalLanguage 0.11
79 TestFunctional/parallel/StatusCmd 0.24
84 TestFunctional/parallel/AddonsCmd 0.12
85 TestFunctional/parallel/PersistentVolumeClaim 25.12
87 TestFunctional/parallel/SSHCmd 0.13
88 TestFunctional/parallel/CpCmd 0.29
90 TestFunctional/parallel/FileSync 0.07
91 TestFunctional/parallel/CertSync 0.44
95 TestFunctional/parallel/NodeLabels 0.04
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.12
99 TestFunctional/parallel/License 0.61
101 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
102 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
104 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.09
105 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
106 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
107 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
108 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
109 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
110 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
111 TestFunctional/parallel/ServiceCmd/DeployApp 6.09
112 TestFunctional/parallel/ServiceCmd/List 0.33
113 TestFunctional/parallel/ServiceCmd/JSONOutput 0.29
114 TestFunctional/parallel/ServiceCmd/HTTPS 0.14
115 TestFunctional/parallel/ServiceCmd/Format 0.11
116 TestFunctional/parallel/ServiceCmd/URL 0.11
117 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
118 TestFunctional/parallel/ProfileCmd/profile_list 0.15
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
120 TestFunctional/parallel/MountCmd/any-port 6.17
121 TestFunctional/parallel/MountCmd/specific-port 1.22
123 TestFunctional/parallel/Version/short 0.04
124 TestFunctional/parallel/Version/components 0.17
125 TestFunctional/parallel/ImageCommands/ImageListShort 0.1
126 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
129 TestFunctional/parallel/ImageCommands/ImageBuild 2.56
130 TestFunctional/parallel/ImageCommands/Setup 2.76
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.31
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.53
133 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.49
134 TestFunctional/parallel/DockerEnv/bash 0.36
135 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
136 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
137 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.57
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.63
142 TestFunctional/delete_addon-resizer_images 0.12
143 TestFunctional/delete_my-image_image 0.04
144 TestFunctional/delete_minikube_cached_images 0.04
148 TestImageBuild/serial/Setup 31.78
149 TestImageBuild/serial/NormalBuild 1.99
151 TestImageBuild/serial/BuildWithDockerIgnore 0.14
152 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
155 TestIngressAddonLegacy/StartLegacyK8sCluster 83.51
157 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 16.31
158 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.21
162 TestJSONOutput/start/Command 46
163 TestJSONOutput/start/Audit 0
165 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/pause/Command 0.3
169 TestJSONOutput/pause/Audit 0
171 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/unpause/Command 0.24
175 TestJSONOutput/unpause/Audit 0
177 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/stop/Command 9.08
181 TestJSONOutput/stop/Audit 0
183 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
185 TestErrorJSONOutput 0.33
190 TestMainNoArgs 0.03
191 TestMinikubeProfile 61.97
247 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
251 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
252 TestNoKubernetes/serial/ProfileList 0.15
253 TestNoKubernetes/serial/Stop 0.06
255 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
269 TestStartStop/group/old-k8s-version/serial/Stop 0.06
270 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
284 TestStartStop/group/embed-certs/serial/Stop 0.06
285 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
289 TestStartStop/group/no-preload/serial/Stop 0.06
290 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.09
306 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
309 TestStartStop/group/newest-cni/serial/DeployApp 0
310 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
311 TestStartStop/group/newest-cni/serial/Stop 0.06
312 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
318 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
319 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-066000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-066000: exit status 85 (97.192625ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |          |
	|         | -p download-only-066000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 09:32:25
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 09:32:25.716356    1315 out.go:296] Setting OutFile to fd 1 ...
	I0615 09:32:25.716486    1315 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:32:25.716489    1315 out.go:309] Setting ErrFile to fd 2...
	I0615 09:32:25.716491    1315 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:32:25.716560    1315 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	W0615 09:32:25.716617    1315 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16718-868/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16718-868/.minikube/config/config.json: no such file or directory
	I0615 09:32:25.717699    1315 out.go:303] Setting JSON to true
	I0615 09:32:25.735952    1315 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":116,"bootTime":1686846629,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 09:32:25.736013    1315 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 09:32:25.741681    1315 out.go:97] [download-only-066000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 09:32:25.745660    1315 out.go:169] MINIKUBE_LOCATION=16718
	W0615 09:32:25.741806    1315 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball: no such file or directory
	I0615 09:32:25.741844    1315 notify.go:220] Checking for updates...
	I0615 09:32:25.753574    1315 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:32:25.756697    1315 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 09:32:25.758055    1315 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 09:32:25.760662    1315 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	W0615 09:32:25.766711    1315 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0615 09:32:25.766945    1315 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 09:32:25.771640    1315 out.go:97] Using the qemu2 driver based on user configuration
	I0615 09:32:25.771661    1315 start.go:297] selected driver: qemu2
	I0615 09:32:25.771665    1315 start.go:884] validating driver "qemu2" against <nil>
	I0615 09:32:25.771749    1315 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0615 09:32:25.775643    1315 out.go:169] Automatically selected the socket_vmnet network
	I0615 09:32:25.781061    1315 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0615 09:32:25.781148    1315 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0615 09:32:25.781195    1315 cni.go:84] Creating CNI manager for ""
	I0615 09:32:25.781201    1315 cni.go:168] CNI unnecessary in this configuration, recommending no CNI
	I0615 09:32:25.781205    1315 start_flags.go:319] config:
	{Name:download-only-066000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:32:25.781383    1315 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 09:32:25.785665    1315 out.go:97] Downloading VM boot image ...
	I0615 09:32:25.785696    1315 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/iso/arm64/minikube-v1.30.1-1686713055-15665-arm64.iso
	I0615 09:32:41.433833    1315 out.go:97] Starting control plane node download-only-066000 in cluster download-only-066000
	I0615 09:32:41.433854    1315 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0615 09:32:41.529630    1315 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0615 09:32:41.529716    1315 cache.go:57] Caching tarball of preloaded images
	I0615 09:32:41.530569    1315 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0615 09:32:41.535866    1315 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0615 09:32:41.535876    1315 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0615 09:32:41.742996    1315 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0615 09:32:53.435434    1315 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0615 09:32:53.435569    1315 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0615 09:32:54.079581    1315 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0615 09:32:54.079759    1315 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/download-only-066000/config.json ...
	I0615 09:32:54.079784    1315 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/download-only-066000/config.json: {Name:mkeab36ea4760a4354a26ce4f059985f1309a7db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0615 09:32:54.080013    1315 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0615 09:32:54.080189    1315 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0615 09:32:54.674458    1315 out.go:169] 
	W0615 09:32:54.679596    1315 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/16718-868/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8 0x103eeffc8] Decompressors:map[bz2:0x140006d28d0 gz:0x140006d28d8 tar:0x140006d2880 tar.bz2:0x140006d2890 tar.gz:0x140006d28a0 tar.xz:0x140006d28b0 tar.zst:0x140006d28c0 tbz2:0x140006d2890 tgz:0x140006d28a0 txz:0x140006d28b0 tzst:0x140006d28c0 xz:0x140006d28e0 zip:0x140006d28f0 zst:0x140006d28e8] Getters:map[file:0x14000bfc740 http:0x14000a0ea00 https:0x14000a0ea50] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0615 09:32:54.679626    1315 out_reason.go:110] 
	W0615 09:32:54.686527    1315 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0615 09:32:54.690415    1315 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-066000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)
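
Note: the 404 in the log above is expected for this Kubernetes version rather than a flake. v1.16.0 predates Apple silicon, so dl.k8s.io publishes no darwin/arm64 kubectl binary and the checksum fetch fails with "bad response code: 404" (this is also why TestDownloadOnly/v1.16.0/kubectl is listed among the failures, while the v1.27.3 download below succeeds). A one-line check from any shell (hypothetical verification, not part of the test run):

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1   # prints 404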

TestDownloadOnly/v1.27.3/json-events (21.14s)

=== RUN   TestDownloadOnly/v1.27.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-066000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-066000 --force --alsologtostderr --kubernetes-version=v1.27.3 --container-runtime=docker --driver=qemu2 : (21.13854975s)
--- PASS: TestDownloadOnly/v1.27.3/json-events (21.14s)

TestDownloadOnly/v1.27.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.3/preload-exists
--- PASS: TestDownloadOnly/v1.27.3/preload-exists (0.00s)

TestDownloadOnly/v1.27.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.3/kubectl
--- PASS: TestDownloadOnly/v1.27.3/kubectl (0.00s)

TestDownloadOnly/v1.27.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.27.3/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-066000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-066000: exit status 85 (75.8825ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |          |
	|         | -p download-only-066000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-066000 | jenkins | v1.30.1 | 15 Jun 23 09:32 PDT |          |
	|         | -p download-only-066000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/15 09:32:54
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0615 09:32:54.881406    1338 out.go:296] Setting OutFile to fd 1 ...
	I0615 09:32:54.881544    1338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:32:54.881546    1338 out.go:309] Setting ErrFile to fd 2...
	I0615 09:32:54.881549    1338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 09:32:54.881628    1338 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	W0615 09:32:54.881688    1338 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/16718-868/.minikube/config/config.json: open /Users/jenkins/minikube-integration/16718-868/.minikube/config/config.json: no such file or directory
	I0615 09:32:54.882645    1338 out.go:303] Setting JSON to true
	I0615 09:32:54.897593    1338 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":145,"bootTime":1686846629,"procs":375,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 09:32:54.897663    1338 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 09:32:54.901670    1338 out.go:97] [download-only-066000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 09:32:54.905459    1338 out.go:169] MINIKUBE_LOCATION=16718
	I0615 09:32:54.901775    1338 notify.go:220] Checking for updates...
	I0615 09:32:54.912319    1338 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 09:32:54.916456    1338 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 09:32:54.919487    1338 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 09:32:54.922472    1338 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	W0615 09:32:54.928427    1338 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0615 09:32:54.928721    1338 config.go:182] Loaded profile config "download-only-066000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0615 09:32:54.928766    1338 start.go:792] api.Load failed for download-only-066000: filestore "download-only-066000": Docker machine "download-only-066000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0615 09:32:54.928824    1338 driver.go:373] Setting default libvirt URI to qemu:///system
	W0615 09:32:54.928838    1338 start.go:792] api.Load failed for download-only-066000: filestore "download-only-066000": Docker machine "download-only-066000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0615 09:32:54.932320    1338 out.go:97] Using the qemu2 driver based on existing profile
	I0615 09:32:54.932334    1338 start.go:297] selected driver: qemu2
	I0615 09:32:54.932337    1338 start.go:884] validating driver "qemu2" against &{Name:download-only-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:32:54.934231    1338 cni.go:84] Creating CNI manager for ""
	I0615 09:32:54.934243    1338 cni.go:152] "qemu2" driver + "docker" runtime found, recommending bridge
	I0615 09:32:54.934257    1338 start_flags.go:319] config:
	{Name:download-only-066000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:download-only-066000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 09:32:54.934343    1338 iso.go:125] acquiring lock: {Name:mka854e5c2270744d1af870c9844a40611877472 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0615 09:32:54.937392    1338 out.go:97] Starting control plane node download-only-066000 in cluster download-only-066000
	I0615 09:32:54.937400    1338 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:32:55.179743    1338 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 09:32:55.179828    1338 cache.go:57] Caching tarball of preloaded images
	I0615 09:32:55.180595    1338 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:32:55.185634    1338 out.go:97] Downloading Kubernetes v1.27.3 preload ...
	I0615 09:32:55.185661    1338 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0615 09:32:55.398377    1338 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.3/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4?checksum=md5:e061b1178966dc348ac19219444153f4 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4
	I0615 09:33:10.847744    1338 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0615 09:33:10.847886    1338 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/16718-868/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.3-docker-overlay2-arm64.tar.lz4 ...
	I0615 09:33:11.409231    1338 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.3 on docker
	I0615 09:33:11.409305    1338 profile.go:148] Saving config to /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/download-only-066000/config.json ...
	I0615 09:33:11.409571    1338 preload.go:132] Checking if preload exists for k8s version v1.27.3 and runtime docker
	I0615 09:33:11.409750    1338 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/16718-868/.minikube/cache/darwin/arm64/v1.27.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-066000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.3/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-066000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.33s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-062000 --alsologtostderr --binary-mirror http://127.0.0.1:49310 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-062000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-062000
--- PASS: TestBinaryMirror (0.33s)

TestAddons/Setup (403.33s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-477000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Done: out/minikube-darwin-arm64 start -p addons-477000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: (6m43.332552666s)
--- PASS: TestAddons/Setup (403.33s)

TestAddons/parallel/Headlamp (12.34s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-477000 --alsologtostderr -v=1
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-6b5756787-s7gx7" [c8b4fd92-ec77-4410-a719-d72bf0a83b8a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-6b5756787-s7gx7" [c8b4fd92-ec77-4410-a719-d72bf0a83b8a] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.018408417s
--- PASS: TestAddons/parallel/Headlamp (12.34s)

TestHyperKitDriverInstallOrUpdate (8.8s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
E0615 10:26:01.250520    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/ingress-addon-legacy-422000/client.crt: no such file or directory
--- PASS: TestHyperKitDriverInstallOrUpdate (8.80s)

TestErrorSpam/setup (31.61s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-940000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-940000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 --driver=qemu2 : (31.611815708s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.3."
--- PASS: TestErrorSpam/setup (31.61s)

TestErrorSpam/start (0.33s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.25s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.63s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 pause
--- PASS: TestErrorSpam/pause (0.63s)

TestErrorSpam/unpause (0.58s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 unpause
--- PASS: TestErrorSpam/unpause (0.58s)

TestErrorSpam/stop (12.25s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 stop: (12.083209s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-940000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-940000 stop
--- PASS: TestErrorSpam/stop (12.25s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/16718-868/.minikube/files/etc/test/nested/copy/1313/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (42.75s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-822000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-822000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (42.751850417s)
--- PASS: TestFunctional/serial/StartWithProxy (42.75s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.04s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-822000 --alsologtostderr -v=8
E0615 10:15:00.526444    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:00.534664    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:00.545231    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:00.566614    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:00.608696    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:00.690849    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:00.852967    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:01.175233    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:01.817651    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:03.100060    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:05.662227    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:10.784423    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
E0615 10:15:21.026598    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-822000 --alsologtostderr -v=8: (38.04073425s)
functional_test.go:659: soft start took 38.04130625s for "functional-822000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.04s)

TestFunctional/serial/KubeContext (0.03s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.06s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-822000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.86s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-822000 cache add registry.k8s.io/pause:3.1: (2.28434325s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-822000 cache add registry.k8s.io/pause:3.3: (1.993340875s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-822000 cache add registry.k8s.io/pause:latest: (1.582379708s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.86s)

TestFunctional/serial/CacheCmd/cache/add_local (1.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local623866105/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 cache add minikube-local-cache-test:functional-822000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 cache delete minikube-local-cache-test:functional-822000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-822000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (68.724542ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-822000 cache reload: (1.057085584s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.47s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 kubectl -- --context functional-822000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.47s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.53s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-822000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.53s)

TestFunctional/serial/ExtraConfig (37.29s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-822000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0615 10:15:41.507817    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-822000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.291102167s)
functional_test.go:757: restart took 37.291223042s for "functional-822000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.29s)

TestFunctional/serial/ComponentHealth (0.05s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-822000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (0.64s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.64s)

TestFunctional/serial/LogsFileCmd (0.64s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd2688895591/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

TestFunctional/serial/InvalidService (3.95s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-822000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-822000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-822000: exit status 115 (151.133041ms)
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31599 |
	|-----------|-------------|-------------|----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-822000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.95s)

TestFunctional/parallel/ConfigCmd (0.21s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 config get cpus: exit status 14 (29.114667ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 config get cpus: exit status 14 (28.366375ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.21s)

TestFunctional/parallel/DashboardCmd (9.27s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-822000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-822000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2975: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.27s)

TestFunctional/parallel/DryRun (0.21s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-822000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-822000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (111.733375ms)
-- stdout --
	* [functional-822000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0615 10:17:03.828341    2963 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:17:03.828455    2963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:17:03.828457    2963 out.go:309] Setting ErrFile to fd 2...
	I0615 10:17:03.828460    2963 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:17:03.828536    2963 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:17:03.829552    2963 out.go:303] Setting JSON to false
	I0615 10:17:03.844852    2963 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2794,"bootTime":1686846629,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:17:03.844943    2963 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:17:03.850033    2963 out.go:177] * [functional-822000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0615 10:17:03.858075    2963 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:17:03.858132    2963 notify.go:220] Checking for updates...
	I0615 10:17:03.865038    2963 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:17:03.868050    2963 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:17:03.871056    2963 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:17:03.874060    2963 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:17:03.877098    2963 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:17:03.880179    2963 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:17:03.880404    2963 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:17:03.885031    2963 out.go:177] * Using the qemu2 driver based on existing profile
	I0615 10:17:03.891980    2963 start.go:297] selected driver: qemu2
	I0615 10:17:03.891984    2963 start.go:884] validating driver "qemu2" against &{Name:functional-822000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-822000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:17:03.892039    2963 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:17:03.898015    2963 out.go:177] 
	W0615 10:17:03.900977    2963 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0615 10:17:03.905001    2963 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-822000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.21s)

TestFunctional/parallel/InternationalLanguage (0.11s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-822000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-822000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (110.471291ms)
-- stdout --
	* [functional-822000] minikube v1.30.1 sur Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0615 10:17:03.713366    2959 out.go:296] Setting OutFile to fd 1 ...
	I0615 10:17:03.713461    2959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:17:03.713464    2959 out.go:309] Setting ErrFile to fd 2...
	I0615 10:17:03.713467    2959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0615 10:17:03.713556    2959 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
	I0615 10:17:03.715036    2959 out.go:303] Setting JSON to false
	I0615 10:17:03.732955    2959 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2794,"bootTime":1686846629,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0615 10:17:03.733040    2959 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0615 10:17:03.738023    2959 out.go:177] * [functional-822000] minikube v1.30.1 sur Darwin 13.4 (arm64)
	I0615 10:17:03.745068    2959 out.go:177]   - MINIKUBE_LOCATION=16718
	I0615 10:17:03.745141    2959 notify.go:220] Checking for updates...
	I0615 10:17:03.752932    2959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	I0615 10:17:03.755978    2959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0615 10:17:03.759048    2959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0615 10:17:03.760485    2959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	I0615 10:17:03.764041    2959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0615 10:17:03.767325    2959 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
	I0615 10:17:03.767548    2959 driver.go:373] Setting default libvirt URI to qemu:///system
	I0615 10:17:03.771876    2959 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0615 10:17:03.779025    2959 start.go:297] selected driver: qemu2
	I0615 10:17:03.779030    2959 start.go:884] validating driver "qemu2" against &{Name:functional-822000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15665/minikube-v1.30.1-1686713055-15665-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.3 ClusterName:functional-822000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0615 10:17:03.779085    2959 start.go:895] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0615 10:17:03.785011    2959 out.go:177] 
	W0615 10:17:03.789019    2959 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0615 10:17:03.793013    2959 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.24s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (25.12s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [1a8f47b4-1068-49ad-9337-22d29b70613c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.018011583s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-822000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-822000 apply -f testdata/storage-provisioner/pvc.yaml
E0615 10:16:22.470041    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-822000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-822000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b9eafdaf-640b-480f-97dd-5786d6eea892] Pending
helpers_test.go:344: "sp-pod" [b9eafdaf-640b-480f-97dd-5786d6eea892] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b9eafdaf-640b-480f-97dd-5786d6eea892] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.008629625s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-822000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-822000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-822000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6529b630-b8e7-445e-b2d2-b75176dd438b] Pending
helpers_test.go:344: "sp-pod" [6529b630-b8e7-445e-b2d2-b75176dd438b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [6529b630-b8e7-445e-b2d2-b75176dd438b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.013063042s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-822000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.12s)

TestFunctional/parallel/SSHCmd (0.13s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.29s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh -n functional-822000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 cp functional-822000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd2988643683/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh -n functional-822000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.29s)

TestFunctional/parallel/FileSync (0.07s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1313/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "sudo cat /etc/test/nested/copy/1313/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.44s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1313.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "sudo cat /etc/ssl/certs/1313.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1313.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "sudo cat /usr/share/ca-certificates/1313.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/13132.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "sudo cat /etc/ssl/certs/13132.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/13132.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "sudo cat /usr/share/ca-certificates/13132.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.44s)

TestFunctional/parallel/NodeLabels (0.04s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-822000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh "sudo systemctl is-active crio": exit status 1 (115.309458ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.12s)

TestFunctional/parallel/License (0.61s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.61s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-822000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-822000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-822000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-822000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2789: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-822000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.09s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-822000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [34986191-e8d3-4528-b112-00bc3aaa497a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [34986191-e8d3-4528-b112-00bc3aaa497a] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.006503292s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.09s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-822000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.119.161 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-822000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
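
Note: the serial TunnelCmd steps above reduce to the manual flow below; a sketch assuming the functional-822000 profile is running and the repo's testdata/testsvc.yaml manifest is available. The kubectl wait and curl probe are illustrative additions, not commands the tests run.

# Keep a tunnel open in the background (the tests run it as a daemon).
out/minikube-darwin-arm64 -p functional-822000 tunnel --alsologtostderr &
# Deploy the test service and wait for its pod to come up.
kubectl --context functional-822000 apply -f testdata/testsvc.yaml
kubectl --context functional-822000 wait --for=condition=Ready pod -l run=nginx-svc --timeout=4m
# The tunnel assigns the service an external IP; read it and probe it directly.
IP=$(kubectl --context functional-822000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -fsS "http://$IP/" >/dev/null && echo "tunnel at http://$IP is working"
# DNS checks, mirroring DNSResolutionByDig and DNSResolutionByDscacheutil.
dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.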
TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-822000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-822000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-hj7g8" [546a4f77-389e-406e-a10d-4e3e67479f3c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-hj7g8" [546a4f77-389e-406e-a10d-4e3e67479f3c] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.010525458s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.09s)

TestFunctional/parallel/ServiceCmd/List (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 service list -o json
functional_test.go:1493: Took "292.387875ms" to run "out/minikube-darwin-arm64 -p functional-822000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.29s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.14s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:31954
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.14s)

TestFunctional/parallel/ServiceCmd/Format (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.11s)

TestFunctional/parallel/ServiceCmd/URL (0.11s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:31954
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.11s)
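
Note: the ServiceCmd sequence above exercises the commands below end to end; a minimal recap taken directly from the logged invocations, assuming the functional-822000 profile is up.

kubectl --context functional-822000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-822000 expose deployment hello-node --type=NodePort --port=8080
out/minikube-darwin-arm64 -p functional-822000 service list            # human-readable table
out/minikube-darwin-arm64 -p functional-822000 service list -o json    # same data as JSON
out/minikube-darwin-arm64 -p functional-822000 service --namespace=default --https --url hello-node
out/minikube-darwin-arm64 -p functional-822000 service hello-node --url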
TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "113.594833ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "32.690041ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "115.005417ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "31.821667ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (6.17s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1384749978/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1686849409973049000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1384749978/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1686849409973049000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1384749978/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1686849409973049000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1384749978/001/test-1686849409973049000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.259041ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 15 17:16 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 15 17:16 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 15 17:16 test-1686849409973049000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh cat /mount-9p/test-1686849409973049000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-822000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a2ca6f4d-715a-4df7-a5fd-ee8974f37277] Pending
helpers_test.go:344: "busybox-mount" [a2ca6f4d-715a-4df7-a5fd-ee8974f37277] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a2ca6f4d-715a-4df7-a5fd-ee8974f37277] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a2ca6f4d-715a-4df7-a5fd-ee8974f37277] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.006241625s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-822000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1384749978/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.17s)
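
Note: MountCmd/any-port drives a 9p mount of a host directory into the guest; the sketch below follows the logged commands, with $SRC standing in for the temporary host directory the test creates.

SRC=$(mktemp -d)                                   # hypothetical host directory
echo "created by hand" > "$SRC/created-by-test"
out/minikube-darwin-arm64 mount -p functional-822000 "$SRC":/mount-9p --alsologtostderr -v=1 &
# Verify the 9p mount from inside the guest, exactly as the test does
# (the first findmnt may fail while the mount is still coming up, as seen above).
out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-darwin-arm64 -p functional-822000 ssh -- ls -la /mount-9p
# Tear down.
out/minikube-darwin-arm64 -p functional-822000 ssh "sudo umount -f /mount-9p"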
TestFunctional/parallel/MountCmd/specific-port (1.22s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3094625612/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (62.752334ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_mount_db20d1296162590bdf2400ddd77c8320c39ea614_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3094625612/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh "sudo umount -f /mount-9p": exit status 1 (67.764042ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-822000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3094625612/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.22s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.17s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.17s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-822000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.3
registry.k8s.io/kube-proxy:v1.27.3
registry.k8s.io/kube-controller-manager:v1.27.3
registry.k8s.io/kube-apiserver:v1.27.3
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-822000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-822000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-822000 image ls --format short --alsologtostderr:
I0615 10:17:21.206108    3114 out.go:296] Setting OutFile to fd 1 ...
I0615 10:17:21.206249    3114 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:17:21.206252    3114 out.go:309] Setting ErrFile to fd 2...
I0615 10:17:21.206255    3114 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:17:21.206335    3114 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
I0615 10:17:21.206750    3114 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0615 10:17:21.206807    3114 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0615 10:17:21.207672    3114 ssh_runner.go:195] Run: systemctl --version
I0615 10:17:21.207683    3114 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/id_rsa Username:docker}
I0615 10:17:21.238680    3114 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.10s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-822000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.27.3           | bcb9e554eaab6 | 56.2MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/etcd                        | 3.5.7-0           | 24bc64e911039 | 181MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/google-containers/addon-resizer      | functional-822000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-822000 | 5a0f5d8b2dcff | 30B    |
| docker.io/library/nginx                     | alpine            | 66bf2c914bf4d | 41MB   |
| registry.k8s.io/kube-controller-manager     | v1.27.3           | ab3683b584ae5 | 107MB  |
| registry.k8s.io/kube-proxy                  | v1.27.3           | fb73e92641fd5 | 66.5MB |
| docker.io/library/nginx                     | latest            | 2d21d843073b4 | 192MB  |
| registry.k8s.io/kube-apiserver              | v1.27.3           | 39dfb036b0986 | 115MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-822000 image ls --format table --alsologtostderr:
I0615 10:17:21.381127    3124 out.go:296] Setting OutFile to fd 1 ...
I0615 10:17:21.381250    3124 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:17:21.381254    3124 out.go:309] Setting ErrFile to fd 2...
I0615 10:17:21.381256    3124 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:17:21.381332    3124 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
I0615 10:17:21.381709    3124 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0615 10:17:21.381765    3124 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0615 10:17:21.382507    3124 ssh_runner.go:195] Run: systemctl --version
I0615 10:17:21.382516    3124 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/id_rsa Username:docker}
I0615 10:17:21.412076    3124 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-822000 image ls --format json --alsologtostderr:
[{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"fb73e92641fd5ab6e5494f0c583616af0bdcc20bc15e4ec7a2e456190f11909a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.27.3"],"size":"66500000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.7-0"],"size":"181000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"39dfb036b0986d18c80ba0cc45d2fd7256751d89ce9a477aac067fad9e14c473","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.27.3"],"size":"115000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-822000"],"size":"32900000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"ab3683b584ae5afe0953106b2d8e470280eda3375ccd715feb601acc2d0611b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.27.3"],"size":"107000000"},{"id":"66bf2c914bf4d0aac4b62f09f9f74ad35898d613024a0f2ec94dca9e79fac6ea","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"41000000"},{"id":"bcb9e554eaab606a5237b493a9a52ec01b8c29a4e49306184916b1bffca38540","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.27.3"],"size":"56200000"},{"id":"2d21d843073b4df6a03022861da4cb59f7116c864fe90b3b5db3b90e1ce932d3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"5a0f5d8b2dcff99d872643e9a5b51a95b334fbc24344be0b0ad81c25cfd6851a","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-822000"],"size":"30"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-822000 image ls --format json --alsologtostderr:
I0615 10:17:21.306388    3120 out.go:296] Setting OutFile to fd 1 ...
I0615 10:17:21.306688    3120 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:17:21.306696    3120 out.go:309] Setting ErrFile to fd 2...
I0615 10:17:21.306698    3120 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:17:21.306788    3120 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
I0615 10:17:21.307257    3120 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0615 10:17:21.307315    3120 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0615 10:17:21.308116    3120 ssh_runner.go:195] Run: systemctl --version
I0615 10:17:21.308126    3120 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/id_rsa Username:docker}
I0615 10:17:21.338799    3120 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)
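
Note: the JSON format shown above is a single array of {id, repoDigests, repoTags, size} objects, so it pipes cleanly into a JSON processor; a sketch assuming jq is installed (jq is not used by the test itself).

out/minikube-darwin-arm64 -p functional-822000 image ls --format json | jq -r '.[].repoTags[]'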
TestFunctional/parallel/ImageCommands/ImageBuild (2.56s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh pgrep buildkitd: exit status 1 (73.559625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image build -t localhost/my-image:functional-822000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-822000 image build -t localhost/my-image:functional-822000 testdata/build --alsologtostderr: (2.416942542s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-822000 image build -t localhost/my-image:functional-822000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 4dab0ef64b33
Removing intermediate container 4dab0ef64b33
---> 02d2342e2026
Step 3/3 : ADD content.txt /
---> 732865af1ea6
Successfully built 732865af1ea6
Successfully tagged localhost/my-image:functional-822000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-822000 image build -t localhost/my-image:functional-822000 testdata/build --alsologtostderr:
I0615 10:17:21.316866    3121 out.go:296] Setting OutFile to fd 1 ...
I0615 10:17:21.317058    3121 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:17:21.317062    3121 out.go:309] Setting ErrFile to fd 2...
I0615 10:17:21.317064    3121 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0615 10:17:21.317134    3121 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/16718-868/.minikube/bin
I0615 10:17:21.317529    3121 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0615 10:17:21.318232    3121 config.go:182] Loaded profile config "functional-822000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.3
I0615 10:17:21.318988    3121 ssh_runner.go:195] Run: systemctl --version
I0615 10:17:21.318996    3121 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/id_rsa Username:docker}
I0615 10:17:21.348297    3121 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2837750461.tar
I0615 10:17:21.348343    3121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0615 10:17:21.351146    3121 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2837750461.tar
I0615 10:17:21.352974    3121 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2837750461.tar: stat -c "%s %y" /var/lib/minikube/build/build.2837750461.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2837750461.tar': No such file or directory
I0615 10:17:21.352996    3121 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2837750461.tar --> /var/lib/minikube/build/build.2837750461.tar (3072 bytes)
I0615 10:17:21.360778    3121 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2837750461
I0615 10:17:21.364685    3121 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2837750461 -xf /var/lib/minikube/build/build.2837750461.tar
I0615 10:17:21.368194    3121 docker.go:339] Building image: /var/lib/minikube/build/build.2837750461
I0615 10:17:21.368257    3121 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-822000 /var/lib/minikube/build/build.2837750461
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0615 10:17:23.689899    3121 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-822000 /var/lib/minikube/build/build.2837750461: (2.323536459s)
I0615 10:17:23.689974    3121 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2837750461
I0615 10:17:23.692817    3121 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2837750461.tar
I0615 10:17:23.695599    3121 build_images.go:207] Built localhost/my-image:functional-822000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2837750461.tar
I0615 10:17:23.695619    3121 build_images.go:123] succeeded building to: functional-822000
I0615 10:17:23.695623    3121 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.56s)
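
Note: the Step 1/3 through 3/3 trace above implies a three-line Dockerfile; the sketch below reconstructs it from the build output (it is not copied from testdata/build) and rebuilds the same image inside the cluster's Docker daemon.

mkdir -p /tmp/build-demo && cd /tmp/build-demo     # hypothetical scratch directory
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo "hello" > content.txt
out/minikube-darwin-arm64 -p functional-822000 image build -t localhost/my-image:functional-822000 .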
TestFunctional/parallel/ImageCommands/Setup (2.76s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.713817125s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-822000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.76s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image load --daemon gcr.io/google-containers/addon-resizer:functional-822000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-822000 image load --daemon gcr.io/google-containers/addon-resizer:functional-822000 --alsologtostderr: (2.230710792s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image load --daemon gcr.io/google-containers/addon-resizer:functional-822000 --alsologtostderr
2023/06/15 10:17:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-822000 image load --daemon gcr.io/google-containers/addon-resizer:functional-822000 --alsologtostderr: (1.457798917s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.53s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.457649125s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-822000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image load --daemon gcr.io/google-containers/addon-resizer:functional-822000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-822000 image load --daemon gcr.io/google-containers/addon-resizer:functional-822000 --alsologtostderr: (1.920658583s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.49s)

TestFunctional/parallel/DockerEnv/bash (0.36s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-822000 docker-env) && out/minikube-darwin-arm64 status -p functional-822000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-822000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.36s)
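
Note: docker-env prints shell exports (DOCKER_HOST plus TLS settings) that point the local docker CLI at the Docker daemon inside the functional-822000 VM, which is what the two assertions above check: status still works with the variables set, and docker images lists the cluster's images.

eval $(out/minikube-darwin-arm64 -p functional-822000 docker-env) && docker images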
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image save gcr.io/google-containers/addon-resizer:functional-822000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.57s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image rm gcr.io/google-containers/addon-resizer:functional-822000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-822000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 image save --daemon gcr.io/google-containers/addon-resizer:functional-822000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-arm64 -p functional-822000 image save --daemon gcr.io/google-containers/addon-resizer:functional-822000 --alsologtostderr: (1.541170709s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-822000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.63s)
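
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon together round-trip an image between the cluster and the host; a condensed replay of the logged commands, with ./addon-resizer-save.tar substituted for the workspace path used in the run.

# Save a cluster-cached image to a host tarball, drop it from the cluster, load it back.
out/minikube-darwin-arm64 -p functional-822000 image save gcr.io/google-containers/addon-resizer:functional-822000 ./addon-resizer-save.tar --alsologtostderr
out/minikube-darwin-arm64 -p functional-822000 image rm gcr.io/google-containers/addon-resizer:functional-822000 --alsologtostderr
out/minikube-darwin-arm64 -p functional-822000 image load ./addon-resizer-save.tar --alsologtostderr
# Or export it straight into the host Docker daemon and confirm it arrived.
out/minikube-darwin-arm64 -p functional-822000 image save --daemon gcr.io/google-containers/addon-resizer:functional-822000 --alsologtostderr
docker image inspect gcr.io/google-containers/addon-resizer:functional-822000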
TestFunctional/delete_addon-resizer_images (0.12s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-822000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-822000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-822000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (31.78s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-116000 --driver=qemu2 
E0615 10:17:44.359389    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/addons-477000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-116000 --driver=qemu2 : (31.778143625s)
--- PASS: TestImageBuild/serial/Setup (31.78s)

TestImageBuild/serial/NormalBuild (1.99s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-116000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-116000: (1.992608875s)
--- PASS: TestImageBuild/serial/NormalBuild (1.99s)

TestImageBuild/serial/BuildWithDockerIgnore (0.14s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-116000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.14s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-116000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)

TestIngressAddonLegacy/StartLegacyK8sCluster (83.51s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-422000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-422000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m23.507260708s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (83.51s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.31s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-422000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-422000 addons enable ingress --alsologtostderr -v=5: (16.306322792s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (16.31s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.21s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-422000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.21s)

TestJSONOutput/start/Command (46s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-907000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0615 10:21:17.272035    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
E0615 10:21:17.278482    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
E0615 10:21:17.290642    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
E0615 10:21:17.312780    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
E0615 10:21:17.354872    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
E0615 10:21:17.437036    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
E0615 10:21:17.599114    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-907000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (46.000674042s)
--- PASS: TestJSONOutput/start/Command (46.00s)
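
Note: with --output=json, each progress line is a structured JSON event rather than styled text. Assuming the events carry a data.message field (the step schema that the CurrentSteps subtests below appear to exercise via step numbering), the stream can be filtered as sketched here; jq is an illustrative addition, not part of the test.

out/minikube-darwin-arm64 start -p json-output-907000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 \
  | jq -r 'select(.data.message != null) | .data.message'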
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
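The two parallel subtests above assert properties of the "currentstep" field carried by io.k8s.sigs.minikube.step events: values must be distinct, and the sequence must never decrease. A minimal sketch of those two checks, assuming step values collected as strings the way the JSON below encodes them; the helper name checkSteps is hypothetical, not from json_output_test.go:

package main

import (
	"fmt"
	"strconv"
)

// checkSteps fails if any currentstep value repeats or goes backwards.
func checkSteps(currentSteps []string) error {
	seen := map[int]bool{}
	prev := -1
	for _, s := range currentSteps {
		n, err := strconv.Atoi(s) // "currentstep" is a JSON string, e.g. "0"
		if err != nil {
			return err
		}
		if seen[n] {
			return fmt.Errorf("duplicate currentstep %d", n)
		}
		if n < prev {
			return fmt.Errorf("currentstep decreased: %d after %d", n, prev)
		}
		seen[n] = true
		prev = n
	}
	return nil
}

func main() {
	fmt.Println(checkSteps([]string{"0", "1", "2"})) // <nil>: distinct and increasing
}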

TestJSONOutput/pause/Command (0.3s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-907000 --output=json --user=testUser
E0615 10:21:17.921236    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.30s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.24s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-907000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.24s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (9.08s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-907000 --output=json --user=testUser
E0615 10:21:18.563499    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
E0615 10:21:19.845825    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
E0615 10:21:22.408310    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-907000 --output=json --user=testUser: (9.078358292s)
--- PASS: TestJSONOutput/stop/Command (9.08s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-032000 --memory=2200 --output=json --wait=true --driver=fail
E0615 10:21:27.529386    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-032000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.856625ms)

-- stdout --
	{"specversion":"1.0","id":"0b887445-5520-4c81-9630-fbf560a810c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-032000] minikube v1.30.1 on Darwin 13.4 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a112d9b-a2f4-4ee9-92c9-8a0ce4ca4d7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16718"}}
	{"specversion":"1.0","id":"9a977dc7-139b-48f5-8fcc-9e922ac8c377","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig"}}
	{"specversion":"1.0","id":"aedae555-a4da-425c-9b9b-5e40182e5215","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b7f62ef8-0efc-4836-8a95-54ff78dbacef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4dc3f2a5-638e-4d9d-bdd8-8ab015fe715f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube"}}
	{"specversion":"1.0","id":"f8fc9b15-164d-4686-8e5d-fa2f01596bd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a8ee7697-0e7d-4d16-b831-1c7a378e0730","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-032000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-032000
--- PASS: TestErrorJSONOutput (0.33s)
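The failed start above ends with an io.k8s.sigs.minikube.error event whose data block carries the machine-readable failure: exitcode 56 and name DRV_UNSUPPORTED_OS. A minimal sketch of extracting those fields, with the struct shape inferred from the JSON above rather than taken from minikube's sources:

package main

import (
	"encoding/json"
	"fmt"
)

// errorEvent models only the fields used here.
type errorEvent struct {
	Type string `json:"type"`
	Data struct {
		ExitCode string `json:"exitcode"`
		Message  string `json:"message"`
		Name     string `json:"name"`
	} `json:"data"`
}

func main() {
	// Abbreviated copy of the io.k8s.sigs.minikube.error line above.
	line := `{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev errorEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
}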

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (61.97s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-881000 --driver=qemu2 
E0615 10:21:37.771380    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-881000 --driver=qemu2 : (29.326643084s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-883000 --driver=qemu2 
E0615 10:21:58.253265    1313 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/16718-868/.minikube/profiles/functional-822000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-883000 --driver=qemu2 : (31.890834375s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-881000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-883000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-883000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-883000
helpers_test.go:175: Cleaning up "first-881000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-881000
--- PASS: TestMinikubeProfile (61.97s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (91.710833ms)

-- stdout --
	* [NoKubernetes-750000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=16718
	  - KUBECONFIG=/Users/jenkins/minikube-integration/16718-868/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/16718-868/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
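This subtest passes precisely because the command fails: it expects exit status 14 from the conflicting --no-kubernetes and --kubernetes-version flags. A minimal sketch of asserting an exit code from Go, in the spirit of the (dbg) Non-zero exit lines above; the helper exitCode is hypothetical, not the suite's own:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs a command and returns its exit status, treating a
// non-zero exit as data rather than a hard failure.
func exitCode(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode(), nil
	}
	return 0, err // nil on success; non-nil if the binary could not run at all
}

func main() {
	code, err := exitCode("out/minikube-darwin-arm64", "start", "-p", "NoKubernetes-750000",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=qemu2")
	fmt.Println(code, err) // the subtest above expects 14 here
}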

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-750000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-750000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.174542ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-750000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.15s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (0.06s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-750000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-750000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-750000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (40.461792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-750000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-252000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-252000 -n old-k8s-version-252000: exit status 7 (27.827958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-252000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
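Note the pattern here and in the sibling groups below: `minikube status --format={{.Host}}` exits with status 7 while still printing "Stopped", and the test treats that as acceptable ("may be ok"). A minimal sketch of a caller that tolerates the stopped-host exit code and just reads the printed state; the helper name is hypothetical, and the exit-code semantics are assumed from the log above:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState reads the host field from `minikube status`, accepting the
// non-zero exit code a stopped host produces (7 in the log above).
func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", profile).Output()
	var ee *exec.ExitError
	if err != nil && !errors.As(err, &ee) {
		return "", err // the command did not run at all
	}
	return strings.TrimSpace(string(out)), nil // e.g. "Stopped" alongside exit 7
}

func main() {
	state, err := hostState("old-k8s-version-252000")
	fmt.Println(state, err)
}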

TestStartStop/group/embed-certs/serial/Stop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-418000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-418000 -n embed-certs-418000: exit status 7 (27.481ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-418000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/no-preload/serial/Stop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-084000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-084000 -n no-preload-084000: exit status 7 (28.342625ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-084000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-832000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-832000 -n default-k8s-diff-port-832000: exit status 7 (27.758333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-832000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-772000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-772000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-772000 -n newest-cni-772000: exit status 7 (28.390333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-772000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/254)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.27.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.3/cached-images (0.00s)

TestDownloadOnly/v1.27.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.27.3/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.3/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:420: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (9.02s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2249981199/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2249981199/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2249981199/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount1: exit status 80 (78.1375ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/16718-868/.minikube/machines/functional-822000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_mount_20d7d9447a3b4d543303ca76e79f8f75e7d9e454_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2: exit status 1 (59.388ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2: exit status 1 (61.169667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2: exit status 1 (59.257417ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2: exit status 1 (60.469375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2: exit status 1 (64.358958ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-822000 ssh "findmnt -T" /mount2: exit status 1 (60.899292ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2249981199/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2249981199/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-822000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2249981199/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (9.02s)
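The retry loop visible above, polling findmnt over `minikube ssh` and skipping once the mounts never appear, can be written as a small helper. A minimal sketch under those assumptions; waitForMount is hypothetical, not functional_test_mount_test.go's implementation:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `findmnt -T dir` inside the guest until it succeeds or
// the deadline passes; on macOS it may never succeed without the user
// approving the unsigned binary, which is why the test skips rather than fails.
func waitForMount(profile, dir string, deadline time.Duration) bool {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"ssh", "findmnt -T "+dir)
		if cmd.Run() == nil {
			return true // the mount is visible inside the guest
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	fmt.Println(waitForMount("functional-822000", "/mount1", 5*time.Second))
}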

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.38s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-678000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-678000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-678000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-678000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-678000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-678000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-678000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-678000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-678000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-678000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-678000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-678000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-678000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-678000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-678000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-678000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-678000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-678000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-678000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-678000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-678000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-678000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-678000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-678000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-678000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-678000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-678000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-678000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-678000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-678000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-678000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: kubelet daemon config:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> k8s: kubelet logs:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-678000

>>> host: docker daemon status:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: docker daemon config:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: docker system info:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: cri-docker daemon status:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: cri-docker daemon config:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: cri-dockerd version:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: containerd daemon status:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: containerd daemon config:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: containerd config dump:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: crio daemon status:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: crio daemon config:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: /etc/crio:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

>>> host: crio config:
* Profile "cilium-678000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-678000"

----------------------- debugLogs end: cilium-678000 [took: 2.136364084s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-678000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-678000
--- SKIP: TestNetworkPlugins/group/cilium (2.38s)
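
Note: every kubectl probe in the debug dump above fails for the same underlying reason. The ">>> k8s: kubectl config:" block shows an empty kubeconfig (clusters, contexts, and users are all null), so the "cilium-678000" context was never created and each "kubectl --context cilium-678000" invocation reports "context was not found" or "does not exist". The following is a minimal illustrative sketch (not minikube's actual collector code) of checking context existence before shelling out to kubectl, assuming k8s.io/client-go is on the module path:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const name = "cilium-678000"
	// Load the same default kubeconfig that kubectl consults.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, "load kubeconfig:", err)
		os.Exit(1)
	}
	// With clusters/contexts/users all null, this lookup fails,
	// matching the `context "cilium-678000" does not exist` errors above.
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q does not exist; skipping kubectl diagnostics\n", name)
		return
	}
	fmt.Printf("context %q found\n", name)
}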
TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-141000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-141000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
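
Note: the skip at start_stop_delete_test.go:103 is a driver gate: the subtest bails out unless the driver under test is virtualbox, which this darwin/arm64 QEMU job never is. A minimal sketch of such a gate in Go's testing package follows; driverName() is a hypothetical stand-in for however the harness learns the active driver, and this is not the actual minikube test code:

package test

import "testing"

// driverName is a hypothetical helper reporting the minikube
// driver this run exercises (qemu2 on this job).
func driverName() string { return "qemu2" }

func TestDisableDriverMounts(t *testing.T) {
	if d := driverName(); d != "virtualbox" {
		// Mirrors the SKIP above: the feature only runs on virtualbox.
		t.Skipf("skipping %s - only runs on virtualbox (driver is %s)", t.Name(), d)
	}
	// ... exercise --disable-driver-mounts against a running cluster ...
}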