Test Report: QEMU_macOS 17719

e08a2828f2be3e524baaf41342316dad88935561:2023-12-07:32188

Failed tests (88/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 18.8
7 TestDownloadOnly/v1.16.0/kubectl 0
27 TestOffline 10.03
35 TestAddons/parallel/Ingress 33.57
49 TestCertOptions 10.1
50 TestCertExpiration 195.46
51 TestDockerFlags 10.24
52 TestForceSystemdFlag 11.03
53 TestForceSystemdEnv 10.26
59 TestErrorSpam/setup 19.66
98 TestFunctional/parallel/ServiceCmdConnect 31.4
165 TestImageBuild/serial/BuildWithBuildArg 1.08
174 TestIngressAddonLegacy/serial/ValidateIngressAddons 54.85
209 TestMountStart/serial/StartWithMountFirst 10.22
212 TestMultiNode/serial/FreshStart2Nodes 9.81
213 TestMultiNode/serial/DeployApp2Nodes 88.75
214 TestMultiNode/serial/PingHostFrom2Pods 0.09
215 TestMultiNode/serial/AddNode 0.08
216 TestMultiNode/serial/MultiNodeLabels 0.06
217 TestMultiNode/serial/ProfileList 0.1
218 TestMultiNode/serial/CopyFile 0.06
219 TestMultiNode/serial/StopNode 0.15
220 TestMultiNode/serial/StartAfterStop 0.11
221 TestMultiNode/serial/RestartKeepsNodes 5.38
222 TestMultiNode/serial/DeleteNode 0.11
223 TestMultiNode/serial/StopMultiNode 0.16
224 TestMultiNode/serial/RestartMultiNode 5.25
225 TestMultiNode/serial/ValidateNameConflict 19.75
229 TestPreload 10.04
231 TestScheduledStopUnix 10.18
232 TestSkaffold 11.9
235 TestRunningBinaryUpgrade 155.81
237 TestKubernetesUpgrade 15.59
250 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.11
251 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.36
252 TestStoppedBinaryUpgrade/Setup 156.14
254 TestPause/serial/Start 9.89
264 TestNoKubernetes/serial/StartWithK8s 9.86
265 TestNoKubernetes/serial/StartWithStopK8s 5.32
266 TestNoKubernetes/serial/Start 5.32
270 TestNoKubernetes/serial/StartNoArgs 5.31
272 TestNetworkPlugins/group/auto/Start 9.78
273 TestNetworkPlugins/group/kindnet/Start 9.96
274 TestNetworkPlugins/group/flannel/Start 9.76
275 TestNetworkPlugins/group/enable-default-cni/Start 9.84
276 TestNetworkPlugins/group/bridge/Start 9.82
277 TestNetworkPlugins/group/kubenet/Start 9.8
278 TestNetworkPlugins/group/custom-flannel/Start 9.93
279 TestNetworkPlugins/group/calico/Start 9.8
280 TestNetworkPlugins/group/false/Start 9.78
282 TestStartStop/group/old-k8s-version/serial/FirstStart 9.95
283 TestStoppedBinaryUpgrade/Upgrade 2.82
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
286 TestStartStop/group/no-preload/serial/FirstStart 9.99
287 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
288 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
291 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
292 TestStartStop/group/no-preload/serial/DeployApp 0.09
293 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/no-preload/serial/SecondStart 5.28
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.11
302 TestStartStop/group/embed-certs/serial/FirstStart 10.41
303 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
304 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
305 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
306 TestStartStop/group/no-preload/serial/Pause 0.1
308 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.05
309 TestStartStop/group/embed-certs/serial/DeployApp 0.09
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
313 TestStartStop/group/embed-certs/serial/SecondStart 5.21
314 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
318 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.28
319 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
321 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
322 TestStartStop/group/embed-certs/serial/Pause 0.1
324 TestStartStop/group/newest-cni/serial/FirstStart 9.81
325 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
326 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
327 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
328 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
333 TestStartStop/group/newest-cni/serial/SecondStart 5.27
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
337 TestStartStop/group/newest-cni/serial/Pause 0.11
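For triage, the table above can be grouped by top-level test name to see which suites dominate the failures. A quick sketch (the sample rows are copied from the table; the `top_level` helper is illustrative, not part of the test harness):

```python
from collections import Counter

# Each row of the failure table is "order test-name duration".
rows = """\
272 TestNetworkPlugins/group/auto/Start 9.78
273 TestNetworkPlugins/group/kindnet/Start 9.96
282 TestStartStop/group/old-k8s-version/serial/FirstStart 9.95
286 TestStartStop/group/no-preload/serial/FirstStart 9.99
""".splitlines()

def top_level(row: str) -> str:
    """Return the top-level test name, e.g. 'TestNetworkPlugins'."""
    return row.split()[1].split("/")[0]

counts = Counter(top_level(r) for r in rows)
print(counts.most_common())
```

Run against the full table, this makes it obvious that most of the 88 failures fall under a handful of suites (TestStartStop, TestMultiNode, TestNetworkPlugins) that all fail to start a qemu2 VM.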
TestDownloadOnly/v1.16.0/json-events (18.8s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-080000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-080000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (18.794921083s)

-- stdout --
	{"specversion":"1.0","id":"2c6c9b5b-503a-4188-820e-74fb9e347a4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-080000] minikube v1.32.0 on Darwin 14.1.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9f6c5374-a35a-494e-8712-14e0be72c2b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17719"}}
	{"specversion":"1.0","id":"2366e474-e5c7-41c5-80cc-7b856bedc44a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig"}}
	{"specversion":"1.0","id":"56e51f48-0de6-431c-881b-7a4525db1655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"979cb49f-ef6d-4441-a5bc-6833c941ae1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b666a823-88c9-4bbb-a26e-eb614794ff36","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube"}}
	{"specversion":"1.0","id":"b9ad4bfa-cd87-48bb-a00d-d48ac6660078","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"52ce87e1-91bb-473e-a196-6419356a0ea4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"1814c8d7-aae1-4ff0-8956-3ee45b3aa124","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"5a909d4a-6fc0-474d-b32f-f068e16661b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"5a35af1c-8363-4eed-ad39-d8df72309136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-080000 in cluster download-only-080000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f2eb4812-015c-4a4f-89ea-c51999c0a0bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"8206f559-934c-4bea-80cd-3deccfdd6867","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80] Decompressors:map[bz2:0x1400080cca0 gz:0x1400080cca8 tar:0x1400080cc50 tar.bz2:0x1400080cc60 tar.gz:0x1400080cc70 tar.xz:0x1400080cc80 tar.zst:0x1400080cc90 tbz2:0x1400080cc60 tgz:0x140008
0cc70 txz:0x1400080cc80 tzst:0x1400080cc90 xz:0x1400080ccb0 zip:0x1400080ccc0 zst:0x1400080ccb8] Getters:map[file:0x14002144570 http:0x14000518230 https:0x14000518280] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"806f8b28-21f1-4efb-88e5-eee884a4853e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1207 12:00:15.740971    1770 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:00:15.741147    1770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:00:15.741150    1770 out.go:309] Setting ErrFile to fd 2...
	I1207 12:00:15.741152    1770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:00:15.741314    1770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	W1207 12:00:15.741397    1770 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17719-1328/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17719-1328/.minikube/config/config.json: no such file or directory
	I1207 12:00:15.742594    1770 out.go:303] Setting JSON to true
	I1207 12:00:15.759710    1770 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1786,"bootTime":1701977429,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:00:15.759800    1770 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:00:15.765519    1770 out.go:97] [download-only-080000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:00:15.769499    1770 out.go:169] MINIKUBE_LOCATION=17719
	I1207 12:00:15.765623    1770 notify.go:220] Checking for updates...
	W1207 12:00:15.765640    1770 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball: no such file or directory
	I1207 12:00:15.776538    1770 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:00:15.779549    1770 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:00:15.782563    1770 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:00:15.785557    1770 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	W1207 12:00:15.791489    1770 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 12:00:15.791669    1770 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:00:15.798495    1770 out.go:97] Using the qemu2 driver based on user configuration
	I1207 12:00:15.798506    1770 start.go:298] selected driver: qemu2
	I1207 12:00:15.798508    1770 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:00:15.798575    1770 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:00:15.803433    1770 out.go:169] Automatically selected the socket_vmnet network
	I1207 12:00:15.810344    1770 start_flags.go:394] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1207 12:00:15.810424    1770 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 12:00:15.810528    1770 cni.go:84] Creating CNI manager for ""
	I1207 12:00:15.810544    1770 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 12:00:15.810548    1770 start_flags.go:323] config:
	{Name:download-only-080000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-080000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:00:15.816104    1770 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:00:15.820345    1770 out.go:97] Downloading VM boot image ...
	I1207 12:00:15.820359    1770 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso
	I1207 12:00:23.213934    1770 out.go:97] Starting control plane node download-only-080000 in cluster download-only-080000
	I1207 12:00:23.213959    1770 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 12:00:23.272647    1770 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1207 12:00:23.272674    1770 cache.go:56] Caching tarball of preloaded images
	I1207 12:00:23.272812    1770 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 12:00:23.276882    1770 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1207 12:00:23.276889    1770 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:00:23.356383    1770 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1207 12:00:32.969728    1770 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:00:32.969894    1770 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:00:33.613776    1770 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1207 12:00:33.613989    1770 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/download-only-080000/config.json ...
	I1207 12:00:33.614005    1770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/download-only-080000/config.json: {Name:mk5e2a90cd9a8bee2269d74db23564da3145f35a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:00:33.614239    1770 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 12:00:33.614417    1770 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I1207 12:00:34.460524    1770 out.go:169] 
	W1207 12:00:34.468559    1770 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80] Decompressors:map[bz2:0x1400080cca0 gz:0x1400080cca8 tar:0x1400080cc50 tar.bz2:0x1400080cc60 tar.gz:0x1400080cc70 tar.xz:0x1400080cc80 tar.zst:0x1400080cc90 tbz2:0x1400080cc60 tgz:0x1400080cc70 txz:0x1400080cc80 tzst:0x1400080cc90 xz:0x1400080ccb0 zip:0x1400080ccc0 zst:0x1400080ccb8] Getters:map[file:0x14002144570 http:0x14000518230 https:0x14000518280] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1207 12:00:34.468593    1770 out_reason.go:110] 
	W1207 12:00:34.475490    1770 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:00:34.478360    1770 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-080000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (18.80s)
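The exit-status-40 failure above is a 404 on the kubectl checksum file: minikube asks for `.../bin/darwin/arm64/kubectl.sha1`, and v1.16.0 predates Apple-silicon releases, so no darwin/arm64 artifacts exist for that version. A minimal sketch reconstructing the URL from the failure message (the `kubectl_url` helper is hypothetical, not minikube's actual function):

```python
def kubectl_url(version: str, goos: str, goarch: str) -> str:
    """Rebuild the download URL shown in the failure above.

    The checksum suffix points at a .sha1 file next to the binary; for
    releases that were never published for an os/arch pair, fetching
    that file yields the "bad response code: 404" seen in the log.
    """
    base = f"https://dl.k8s.io/release/{version}/bin/{goos}/{goarch}/kubectl"
    return f"{base}?checksum=file:{base}.sha1"

url = kubectl_url("v1.16.0", "darwin", "arm64")
```

The resulting string matches the URL in the error verbatim, which supports the read that this is a missing-artifact problem for old Kubernetes versions on arm64, not a transient network failure.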

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:163: expected the file for binary exist at "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
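This failure is a direct consequence of the previous one: the download never succeeded, so the cached binary the test stats is absent. The cache layout implied by the failure message can be sketched as follows (the `kubectl_cache_path` helper is illustrative, not minikube's own code):

```python
import os

def kubectl_cache_path(minikube_home: str, goos: str, goarch: str,
                       version: str) -> str:
    """Local cache location that TestDownloadOnly/.../kubectl checks.

    Mirrors the path in the failure message:
    <MINIKUBE_HOME>/cache/<os>/<arch>/<version>/kubectl
    """
    return os.path.join(minikube_home, "cache", goos, goarch, version,
                        "kubectl")

p = kubectl_cache_path(
    "/Users/jenkins/minikube-integration/17719-1328/.minikube",
    "darwin", "arm64", "v1.16.0",
)
```

The computed path matches the one in the `stat` error, so any fix that makes the v1.16.0 download succeed (or skips unavailable os/arch pairs) should clear both failures together.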

TestOffline (10.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-068000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-068000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.856336583s)

-- stdout --
	* [offline-docker-068000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-068000 in cluster offline-docker-068000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-068000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:19:42.495111    3642 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:19:42.495270    3642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:19:42.495273    3642 out.go:309] Setting ErrFile to fd 2...
	I1207 12:19:42.495276    3642 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:19:42.495415    3642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:19:42.496493    3642 out.go:303] Setting JSON to false
	I1207 12:19:42.513846    3642 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2953,"bootTime":1701977429,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:19:42.513938    3642 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:19:42.518909    3642 out.go:177] * [offline-docker-068000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:19:42.526827    3642 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:19:42.526832    3642 notify.go:220] Checking for updates...
	I1207 12:19:42.533742    3642 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:19:42.536843    3642 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:19:42.539857    3642 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:19:42.547781    3642 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:19:42.550855    3642 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:19:42.554213    3642 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:19:42.554271    3642 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:19:42.557841    3642 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:19:42.564780    3642 start.go:298] selected driver: qemu2
	I1207 12:19:42.564788    3642 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:19:42.564799    3642 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:19:42.566848    3642 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:19:42.569841    3642 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:19:42.572902    3642 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:19:42.572940    3642 cni.go:84] Creating CNI manager for ""
	I1207 12:19:42.572947    3642 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:19:42.572950    3642 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:19:42.572958    3642 start_flags.go:323] config:
	{Name:offline-docker-068000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-068000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:19:42.577643    3642 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:42.584866    3642 out.go:177] * Starting control plane node offline-docker-068000 in cluster offline-docker-068000
	I1207 12:19:42.588808    3642 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:19:42.588846    3642 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:19:42.588857    3642 cache.go:56] Caching tarball of preloaded images
	I1207 12:19:42.588932    3642 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:19:42.588939    3642 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:19:42.589013    3642 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/offline-docker-068000/config.json ...
	I1207 12:19:42.589025    3642 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/offline-docker-068000/config.json: {Name:mk135cc0bf7dfd98cf020ccd687777b07e8b6422 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:19:42.589343    3642 start.go:365] acquiring machines lock for offline-docker-068000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:19:42.589387    3642 start.go:369] acquired machines lock for "offline-docker-068000" in 30.416µs
	I1207 12:19:42.589399    3642 start.go:93] Provisioning new machine with config: &{Name:offline-docker-068000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-068000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:19:42.589452    3642 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:19:42.597778    3642 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1207 12:19:42.612743    3642 start.go:159] libmachine.API.Create for "offline-docker-068000" (driver="qemu2")
	I1207 12:19:42.612771    3642 client.go:168] LocalClient.Create starting
	I1207 12:19:42.612850    3642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:19:42.612879    3642 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:42.612890    3642 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:42.612931    3642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:19:42.612953    3642 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:42.612962    3642 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:42.613320    3642 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:19:42.739111    3642 main.go:141] libmachine: Creating SSH key...
	I1207 12:19:42.889317    3642 main.go:141] libmachine: Creating Disk image...
	I1207 12:19:42.889328    3642 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:19:42.889498    3642 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2
	I1207 12:19:42.901845    3642 main.go:141] libmachine: STDOUT: 
	I1207 12:19:42.901926    3642 main.go:141] libmachine: STDERR: 
	I1207 12:19:42.901993    3642 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2 +20000M
	I1207 12:19:42.913608    3642 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:19:42.913628    3642 main.go:141] libmachine: STDERR: 
	I1207 12:19:42.913649    3642 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2
	I1207 12:19:42.913656    3642 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:19:42.913695    3642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:5e:ff:a7:16:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2
	I1207 12:19:42.915521    3642 main.go:141] libmachine: STDOUT: 
	I1207 12:19:42.915538    3642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:19:42.915559    3642 client.go:171] LocalClient.Create took 302.781209ms
	I1207 12:19:44.917581    3642 start.go:128] duration metric: createHost completed in 2.328167083s
	I1207 12:19:44.917595    3642 start.go:83] releasing machines lock for "offline-docker-068000", held for 2.3282465s
	W1207 12:19:44.917603    3642 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:19:44.921007    3642 out.go:177] * Deleting "offline-docker-068000" in qemu2 ...
	W1207 12:19:44.937775    3642 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:19:44.937783    3642 start.go:709] Will try again in 5 seconds ...
	I1207 12:19:49.939981    3642 start.go:365] acquiring machines lock for offline-docker-068000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:19:49.940430    3642 start.go:369] acquired machines lock for "offline-docker-068000" in 322.333µs
	I1207 12:19:49.940594    3642 start.go:93] Provisioning new machine with config: &{Name:offline-docker-068000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-068000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:19:49.940832    3642 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:19:49.948737    3642 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1207 12:19:49.999422    3642 start.go:159] libmachine.API.Create for "offline-docker-068000" (driver="qemu2")
	I1207 12:19:49.999486    3642 client.go:168] LocalClient.Create starting
	I1207 12:19:49.999635    3642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:19:49.999721    3642 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:49.999739    3642 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:49.999813    3642 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:19:49.999857    3642 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:49.999871    3642 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:50.000397    3642 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:19:50.137710    3642 main.go:141] libmachine: Creating SSH key...
	I1207 12:19:50.246786    3642 main.go:141] libmachine: Creating Disk image...
	I1207 12:19:50.246794    3642 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:19:50.246984    3642 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2
	I1207 12:19:50.259004    3642 main.go:141] libmachine: STDOUT: 
	I1207 12:19:50.259030    3642 main.go:141] libmachine: STDERR: 
	I1207 12:19:50.259102    3642 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2 +20000M
	I1207 12:19:50.269403    3642 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:19:50.269420    3642 main.go:141] libmachine: STDERR: 
	I1207 12:19:50.269442    3642 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2
	I1207 12:19:50.269453    3642 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:19:50.269503    3642 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:22:41:06:00:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/offline-docker-068000/disk.qcow2
	I1207 12:19:50.271162    3642 main.go:141] libmachine: STDOUT: 
	I1207 12:19:50.271179    3642 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:19:50.271193    3642 client.go:171] LocalClient.Create took 271.706708ms
	I1207 12:19:52.273385    3642 start.go:128] duration metric: createHost completed in 2.332532625s
	I1207 12:19:52.273538    3642 start.go:83] releasing machines lock for "offline-docker-068000", held for 2.3331255s
	W1207 12:19:52.273949    3642 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-068000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-068000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:19:52.287727    3642 out.go:177] 
	W1207 12:19:52.291863    3642 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:19:52.291911    3642 out.go:239] * 
	* 
	W1207 12:19:52.294464    3642 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:19:52.306609    3642 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-068000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:523: *** TestOffline FAILED at 2023-12-07 12:19:52.321222 -0800 PST m=+1176.674705126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-068000 -n offline-docker-068000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-068000 -n offline-docker-068000: exit status 7 (72.378333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-068000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-068000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-068000
--- FAIL: TestOffline (10.03s)

TestAddons/parallel/Ingress (33.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-210000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-210000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-210000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6ce0aa98-d938-4a8e-9c90-ada67e21a814] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6ce0aa98-d938-4a8e-9c90-ada67e21a814] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.011065s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-210000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.2: exit status 1 (15.036245125s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.2" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:310: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-arm64 -p addons-210000 addons disable ingress --alsologtostderr -v=1: (7.226677208s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-210000 -n addons-210000
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-080000 | jenkins | v1.32.0 | 07 Dec 23 12:00 PST |                     |
	|         | -p download-only-080000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-080000 | jenkins | v1.32.0 | 07 Dec 23 12:00 PST |                     |
	|         | -p download-only-080000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| start   | -o=json --download-only                                                                     | download-only-080000 | jenkins | v1.32.0 | 07 Dec 23 12:01 PST |                     |
	|         | -p download-only-080000                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.1                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 07 Dec 23 12:02 PST | 07 Dec 23 12:02 PST |
	| delete  | -p download-only-080000                                                                     | download-only-080000 | jenkins | v1.32.0 | 07 Dec 23 12:02 PST | 07 Dec 23 12:02 PST |
	| delete  | -p download-only-080000                                                                     | download-only-080000 | jenkins | v1.32.0 | 07 Dec 23 12:02 PST | 07 Dec 23 12:02 PST |
	| start   | --download-only -p                                                                          | binary-mirror-032000 | jenkins | v1.32.0 | 07 Dec 23 12:02 PST |                     |
	|         | binary-mirror-032000                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49324                                                                      |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                                              |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-032000                                                                     | binary-mirror-032000 | jenkins | v1.32.0 | 07 Dec 23 12:02 PST | 07 Dec 23 12:02 PST |
	| addons  | enable dashboard -p                                                                         | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:02 PST |                     |
	|         | addons-210000                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:02 PST |                     |
	|         | addons-210000                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-210000 --wait=true                                                                | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:02 PST | 07 Dec 23 12:04 PST |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	| ip      | addons-210000 ip                                                                            | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:04 PST | 07 Dec 23 12:04 PST |
	| addons  | addons-210000 addons disable                                                                | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:04 PST | 07 Dec 23 12:04 PST |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-210000 addons                                                                        | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:04 PST | 07 Dec 23 12:04 PST |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:04 PST | 07 Dec 23 12:04 PST |
	|         | addons-210000                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-210000 ssh curl -s                                                                   | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:04 PST | 07 Dec 23 12:04 PST |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-210000 ip                                                                            | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:04 PST | 07 Dec 23 12:04 PST |
	| addons  | addons-210000 addons                                                                        | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:04 PST | 07 Dec 23 12:04 PST |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-210000 addons                                                                        | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:04 PST | 07 Dec 23 12:05 PST |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-210000 addons disable                                                                | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:05 PST | 07 Dec 23 12:05 PST |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-210000 addons disable                                                                | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:05 PST | 07 Dec 23 12:05 PST |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| ssh     | addons-210000 ssh cat                                                                       | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:05 PST | 07 Dec 23 12:05 PST |
	|         | /opt/local-path-provisioner/pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-210000 addons disable                                                                | addons-210000        | jenkins | v1.32.0 | 07 Dec 23 12:05 PST |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 12:02:04
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 12:02:04.734932    1902 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:02:04.735099    1902 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:02:04.735102    1902 out.go:309] Setting ErrFile to fd 2...
	I1207 12:02:04.735104    1902 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:02:04.735236    1902 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:02:04.736305    1902 out.go:303] Setting JSON to false
	I1207 12:02:04.752261    1902 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1895,"bootTime":1701977429,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:02:04.752322    1902 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:02:04.756768    1902 out.go:177] * [addons-210000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:02:04.763725    1902 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:02:04.763771    1902 notify.go:220] Checking for updates...
	I1207 12:02:04.770731    1902 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:02:04.773687    1902 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:02:04.776739    1902 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:02:04.779767    1902 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:02:04.782684    1902 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:02:04.785903    1902 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:02:04.790747    1902 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:02:04.797762    1902 start.go:298] selected driver: qemu2
	I1207 12:02:04.797770    1902 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:02:04.797779    1902 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:02:04.800103    1902 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:02:04.803703    1902 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:02:04.806784    1902 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:02:04.806824    1902 cni.go:84] Creating CNI manager for ""
	I1207 12:02:04.806832    1902 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:02:04.806836    1902 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:02:04.806842    1902 start_flags.go:323] config:
	{Name:addons-210000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-210000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:02:04.811328    1902 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:02:04.819740    1902 out.go:177] * Starting control plane node addons-210000 in cluster addons-210000
	I1207 12:02:04.823718    1902 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:02:04.823733    1902 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:02:04.823745    1902 cache.go:56] Caching tarball of preloaded images
	I1207 12:02:04.823810    1902 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:02:04.823817    1902 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:02:04.824061    1902 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/config.json ...
	I1207 12:02:04.824073    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/config.json: {Name:mk2145c4e4cd3d339335c439ff45b6aba24d9ef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:04.824296    1902 start.go:365] acquiring machines lock for addons-210000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:02:04.824421    1902 start.go:369] acquired machines lock for "addons-210000" in 120.041µs
	I1207 12:02:04.824433    1902 start.go:93] Provisioning new machine with config: &{Name:addons-210000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-210000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:02:04.824465    1902 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:02:04.832706    1902 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1207 12:02:05.068578    1902 start.go:159] libmachine.API.Create for "addons-210000" (driver="qemu2")
	I1207 12:02:05.068621    1902 client.go:168] LocalClient.Create starting
	I1207 12:02:05.068779    1902 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:02:05.257969    1902 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:02:05.315319    1902 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:02:05.664426    1902 main.go:141] libmachine: Creating SSH key...
	I1207 12:02:05.722492    1902 main.go:141] libmachine: Creating Disk image...
	I1207 12:02:05.722503    1902 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:02:05.722741    1902 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/disk.qcow2
	I1207 12:02:05.793057    1902 main.go:141] libmachine: STDOUT: 
	I1207 12:02:05.793085    1902 main.go:141] libmachine: STDERR: 
	I1207 12:02:05.793143    1902 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/disk.qcow2 +20000M
	I1207 12:02:05.803844    1902 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:02:05.803859    1902 main.go:141] libmachine: STDERR: 
	I1207 12:02:05.803878    1902 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/disk.qcow2
	I1207 12:02:05.803885    1902 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:02:05.803918    1902 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:b5:83:6f:32:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/disk.qcow2
	I1207 12:02:05.936946    1902 main.go:141] libmachine: STDOUT: 
	I1207 12:02:05.936973    1902 main.go:141] libmachine: STDERR: 
	I1207 12:02:05.936977    1902 main.go:141] libmachine: Attempt 0
	I1207 12:02:05.937009    1902 main.go:141] libmachine: Searching for 12:b5:83:6f:32:61 in /var/db/dhcpd_leases ...
	I1207 12:02:05.937054    1902 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1207 12:02:05.937076    1902 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:02:07.939222    1902 main.go:141] libmachine: Attempt 1
	I1207 12:02:07.939293    1902 main.go:141] libmachine: Searching for 12:b5:83:6f:32:61 in /var/db/dhcpd_leases ...
	I1207 12:02:07.939565    1902 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1207 12:02:07.939616    1902 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:02:09.941777    1902 main.go:141] libmachine: Attempt 2
	I1207 12:02:09.941844    1902 main.go:141] libmachine: Searching for 12:b5:83:6f:32:61 in /var/db/dhcpd_leases ...
	I1207 12:02:09.942058    1902 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1207 12:02:09.942108    1902 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:02:11.944228    1902 main.go:141] libmachine: Attempt 3
	I1207 12:02:11.944258    1902 main.go:141] libmachine: Searching for 12:b5:83:6f:32:61 in /var/db/dhcpd_leases ...
	I1207 12:02:11.944321    1902 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1207 12:02:11.944335    1902 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:02:13.946339    1902 main.go:141] libmachine: Attempt 4
	I1207 12:02:13.946350    1902 main.go:141] libmachine: Searching for 12:b5:83:6f:32:61 in /var/db/dhcpd_leases ...
	I1207 12:02:13.946373    1902 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1207 12:02:13.946379    1902 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:02:15.948449    1902 main.go:141] libmachine: Attempt 5
	I1207 12:02:15.948489    1902 main.go:141] libmachine: Searching for 12:b5:83:6f:32:61 in /var/db/dhcpd_leases ...
	I1207 12:02:15.948536    1902 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1207 12:02:15.948548    1902 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:02:17.949027    1902 main.go:141] libmachine: Attempt 6
	I1207 12:02:17.949053    1902 main.go:141] libmachine: Searching for 12:b5:83:6f:32:61 in /var/db/dhcpd_leases ...
	I1207 12:02:17.949121    1902 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I1207 12:02:17.949132    1902 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:02:19.951205    1902 main.go:141] libmachine: Attempt 7
	I1207 12:02:19.951235    1902 main.go:141] libmachine: Searching for 12:b5:83:6f:32:61 in /var/db/dhcpd_leases ...
	I1207 12:02:19.951298    1902 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I1207 12:02:19.951312    1902 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x6573764a}
	I1207 12:02:19.951316    1902 main.go:141] libmachine: Found match: 12:b5:83:6f:32:61
	I1207 12:02:19.951323    1902 main.go:141] libmachine: IP: 192.168.105.2
	I1207 12:02:19.951328    1902 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
	I1207 12:02:20.957774    1902 machine.go:88] provisioning docker machine ...
	I1207 12:02:20.957803    1902 buildroot.go:166] provisioning hostname "addons-210000"
	I1207 12:02:20.958188    1902 main.go:141] libmachine: Using SSH client type: native
	I1207 12:02:20.958448    1902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003c6a70] 0x1003c91e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1207 12:02:20.958455    1902 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-210000 && echo "addons-210000" | sudo tee /etc/hostname
	I1207 12:02:21.025167    1902 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-210000
	
	I1207 12:02:21.025225    1902 main.go:141] libmachine: Using SSH client type: native
	I1207 12:02:21.025481    1902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003c6a70] 0x1003c91e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1207 12:02:21.025489    1902 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-210000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-210000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-210000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 12:02:21.090203    1902 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 12:02:21.090215    1902 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17719-1328/.minikube CaCertPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17719-1328/.minikube}
	I1207 12:02:21.090223    1902 buildroot.go:174] setting up certificates
	I1207 12:02:21.090229    1902 provision.go:83] configureAuth start
	I1207 12:02:21.090232    1902 provision.go:138] copyHostCerts
	I1207 12:02:21.090342    1902 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.pem (1078 bytes)
	I1207 12:02:21.090566    1902 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17719-1328/.minikube/cert.pem (1123 bytes)
	I1207 12:02:21.090666    1902 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17719-1328/.minikube/key.pem (1679 bytes)
	I1207 12:02:21.090761    1902 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca-key.pem org=jenkins.addons-210000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-210000]
	I1207 12:02:21.239435    1902 provision.go:172] copyRemoteCerts
	I1207 12:02:21.239495    1902 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 12:02:21.239515    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:21.273662    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1207 12:02:21.280700    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1207 12:02:21.287706    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 12:02:21.294756    1902 provision.go:86] duration metric: configureAuth took 204.528083ms
	I1207 12:02:21.294768    1902 buildroot.go:189] setting minikube options for container-runtime
	I1207 12:02:21.294862    1902 config.go:182] Loaded profile config "addons-210000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:02:21.294894    1902 main.go:141] libmachine: Using SSH client type: native
	I1207 12:02:21.295102    1902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003c6a70] 0x1003c91e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1207 12:02:21.295107    1902 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1207 12:02:21.356194    1902 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1207 12:02:21.356203    1902 buildroot.go:70] root file system type: tmpfs
	I1207 12:02:21.356264    1902 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1207 12:02:21.356306    1902 main.go:141] libmachine: Using SSH client type: native
	I1207 12:02:21.356552    1902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003c6a70] 0x1003c91e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1207 12:02:21.356594    1902 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1207 12:02:21.423404    1902 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1207 12:02:21.423448    1902 main.go:141] libmachine: Using SSH client type: native
	I1207 12:02:21.423705    1902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003c6a70] 0x1003c91e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1207 12:02:21.423714    1902 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1207 12:02:21.775618    1902 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1207 12:02:21.775632    1902 machine.go:91] provisioned docker machine in 817.867708ms
	I1207 12:02:21.775639    1902 client.go:171] LocalClient.Create took 16.70743s
	I1207 12:02:21.775652    1902 start.go:167] duration metric: libmachine.API.Create for "addons-210000" took 16.707496625s
	I1207 12:02:21.775658    1902 start.go:300] post-start starting for "addons-210000" (driver="qemu2")
	I1207 12:02:21.775663    1902 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 12:02:21.775724    1902 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 12:02:21.775733    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:21.808736    1902 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 12:02:21.809954    1902 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 12:02:21.809965    1902 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17719-1328/.minikube/addons for local assets ...
	I1207 12:02:21.810034    1902 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17719-1328/.minikube/files for local assets ...
	I1207 12:02:21.810065    1902 start.go:303] post-start completed in 34.405541ms
	I1207 12:02:21.810409    1902 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/config.json ...
	I1207 12:02:21.810581    1902 start.go:128] duration metric: createHost completed in 16.986535125s
	I1207 12:02:21.810614    1902 main.go:141] libmachine: Using SSH client type: native
	I1207 12:02:21.810826    1902 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1003c6a70] 0x1003c91e0 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I1207 12:02:21.810831    1902 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 12:02:21.872010    1902 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701979341.592797585
	
	I1207 12:02:21.872021    1902 fix.go:206] guest clock: 1701979341.592797585
	I1207 12:02:21.872025    1902 fix.go:219] Guest: 2023-12-07 12:02:21.592797585 -0800 PST Remote: 2023-12-07 12:02:21.810584 -0800 PST m=+17.097660543 (delta=-217.786415ms)
	I1207 12:02:21.872036    1902 fix.go:190] guest clock delta is within tolerance: -217.786415ms
	I1207 12:02:21.872040    1902 start.go:83] releasing machines lock for "addons-210000", held for 17.048038584s
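[Editor's note] The `fix.go` lines above read the guest clock via `date +%s.%N` over SSH and compare it to the host clock; a delta of -217ms is accepted. A hedged reconstruction of that tolerance check, using the two timestamps from the log and an assumed ±2s bound (the actual bound lives in minikube's fix.go and is not shown here):

```shell
# Guest timestamp echoed back over SSH, and the host-side timestamp, both
# copied from the log lines above.
guest=1701979341.592797585
host=1701979341.810584

# awk computes the signed delta; outside the (assumed) bound, a resync
# would be triggered instead.
awk -v g="$guest" -v h="$host" 'BEGIN {
  d = g - h
  if (d < -2 || d > 2) print "resync clock, delta:", d
  else print "within tolerance, delta:", d
}'
```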
	I1207 12:02:21.872400    1902 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 12:02:21.872402    1902 ssh_runner.go:195] Run: cat /version.json
	I1207 12:02:21.872413    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:21.872429    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:21.907395    1902 ssh_runner.go:195] Run: systemctl --version
	I1207 12:02:21.954792    1902 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 12:02:21.956873    1902 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 12:02:21.956906    1902 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 12:02:21.962473    1902 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 12:02:21.962482    1902 start.go:475] detecting cgroup driver to use...
	I1207 12:02:21.962604    1902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 12:02:21.968654    1902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1207 12:02:21.971641    1902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1207 12:02:21.974726    1902 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1207 12:02:21.974755    1902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1207 12:02:21.977989    1902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 12:02:21.981031    1902 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1207 12:02:21.983780    1902 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 12:02:21.986928    1902 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 12:02:21.990305    1902 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1207 12:02:21.994196    1902 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 12:02:21.996977    1902 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 12:02:21.999532    1902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:02:22.061094    1902 ssh_runner.go:195] Run: sudo systemctl restart containerd
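[Editor's note] The run of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place: pinning the sandbox image, forcing `SystemdCgroup = false` to match the "cgroupfs" driver, and normalizing the runc runtime version. The same edits can be exercised against a throwaway fixture (the TOML below is a simplified stand-in, not the real containerd config layout):

```shell
# Scratch copy standing in for /etc/containerd/config.toml.
cat > ./config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# The two indentation-preserving rewrites from the log (GNU sed assumed).
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' ./config.toml
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' ./config.toml

grep -E 'sandbox_image|SystemdCgroup' ./config.toml
```

The `\1` backreference keeps each key at its original indentation, which matters because TOML tables in this file are indentation-formatted for readability.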
	I1207 12:02:22.069085    1902 start.go:475] detecting cgroup driver to use...
	I1207 12:02:22.069179    1902 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1207 12:02:22.074446    1902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 12:02:22.078838    1902 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 12:02:22.085480    1902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 12:02:22.089964    1902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 12:02:22.094514    1902 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1207 12:02:22.134473    1902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 12:02:22.139974    1902 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 12:02:22.145455    1902 ssh_runner.go:195] Run: which cri-dockerd
	I1207 12:02:22.146933    1902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1207 12:02:22.150320    1902 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1207 12:02:22.155583    1902 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1207 12:02:22.215683    1902 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1207 12:02:22.277812    1902 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1207 12:02:22.277880    1902 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1207 12:02:22.283468    1902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:02:22.338609    1902 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 12:02:23.495691    1902 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.157096833s)
	I1207 12:02:23.495751    1902 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1207 12:02:23.565365    1902 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1207 12:02:23.628775    1902 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1207 12:02:23.694165    1902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:02:23.758860    1902 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1207 12:02:23.767063    1902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:02:23.831536    1902 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1207 12:02:23.853877    1902 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1207 12:02:23.853968    1902 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1207 12:02:23.857058    1902 start.go:543] Will wait 60s for crictl version
	I1207 12:02:23.857104    1902 ssh_runner.go:195] Run: which crictl
	I1207 12:02:23.858430    1902 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 12:02:23.881449    1902 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1207 12:02:23.881528    1902 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 12:02:23.891319    1902 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 12:02:23.908755    1902 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1207 12:02:23.908825    1902 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1207 12:02:23.910328    1902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
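[Editor's note] The bash one-liner above updates /etc/hosts idempotently: `grep -v` drops any stale line for the name, the fresh mapping is appended, and the result is copied back via a temp file so the file is never left half-written. Demonstrated against a scratch file instead of /etc/hosts:

```shell
# Scratch hosts file standing in for /etc/hosts.
hosts=./hosts
printf '127.0.0.1\tlocalhost\n192.168.105.1\thost.minikube.internal\n' > "$hosts"

# Drop any existing entry for the name, append the fresh mapping, then swap
# the temp file in -- the same shape as the logged one-liner.
{ grep -v $'\thost.minikube.internal$' "$hosts"
  printf '192.168.105.1\thost.minikube.internal\n'
} > "$hosts.tmp"
mv "$hosts.tmp" "$hosts"

grep -c 'host.minikube.internal' "$hosts"
```

Because the stale line is removed before appending, re-running the command never duplicates the entry.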
	I1207 12:02:23.914509    1902 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:02:23.914549    1902 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 12:02:23.920106    1902 docker.go:671] Got preloaded images: 
	I1207 12:02:23.920116    1902 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1207 12:02:23.920156    1902 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1207 12:02:23.923046    1902 ssh_runner.go:195] Run: which lz4
	I1207 12:02:23.924465    1902 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1207 12:02:23.925694    1902 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 12:02:23.925703    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (357941720 bytes)
	I1207 12:02:25.264978    1902 docker.go:635] Took 1.340554 seconds to copy over tarball
	I1207 12:02:25.265037    1902 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 12:02:26.351468    1902 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.086446709s)
	I1207 12:02:26.351482    1902 ssh_runner.go:146] rm: /preloaded.tar.lz4
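[Editor's note] The preload step above copies an lz4-compressed image tarball to /preloaded.tar.lz4, unpacks it with GNU tar's `-I` (external compressor) flag, then deletes the tarball. A runnable sketch of the same pack/extract/cleanup flow, with gzip standing in for lz4 so it runs without the lz4 tool installed:

```shell
# Build a tiny source tree standing in for the preloaded /var contents.
mkdir -p ./var-src/lib/docker
echo demo > ./var-src/lib/docker/marker

# Pack and unpack via tar's -I flag; the log uses `-I lz4 -C /var -xf`.
tar -C ./var-src -I gzip -cf ./preloaded.tar.gz .
mkdir -p ./var-dst
tar -I gzip -C ./var-dst -xf ./preloaded.tar.gz

# Mirror ssh_runner's `rm: /preloaded.tar.lz4` cleanup.
rm -f ./preloaded.tar.gz
cat ./var-dst/lib/docker/marker
```

`-C` changes directory before archiving/extracting, which is why the logged command can unpack directly into /var without path rewriting.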
	I1207 12:02:26.367299    1902 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1207 12:02:26.370805    1902 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1207 12:02:26.376193    1902 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:02:26.439317    1902 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 12:02:29.237902    1902 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.798640083s)
	I1207 12:02:29.237999    1902 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 12:02:29.244175    1902 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1207 12:02:29.244184    1902 cache_images.go:84] Images are preloaded, skipping loading
	I1207 12:02:29.244241    1902 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1207 12:02:29.252144    1902 cni.go:84] Creating CNI manager for ""
	I1207 12:02:29.252160    1902 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:02:29.252186    1902 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 12:02:29.252195    1902 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-210000 NodeName:addons-210000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 12:02:29.252264    1902 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-210000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 12:02:29.252521    1902 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=addons-210000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-210000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 12:02:29.252734    1902 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 12:02:29.256203    1902 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 12:02:29.256247    1902 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 12:02:29.258659    1902 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1207 12:02:29.263342    1902 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 12:02:29.268029    1902 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I1207 12:02:29.272777    1902 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I1207 12:02:29.274039    1902 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 12:02:29.277955    1902 certs.go:56] Setting up /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000 for IP: 192.168.105.2
	I1207 12:02:29.277976    1902 certs.go:190] acquiring lock for shared ca certs: {Name:mka2d4ba9e36871ccc0bd079595857e1e300747f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:29.278486    1902 certs.go:204] generating minikubeCA CA: /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.key
	I1207 12:02:29.351003    1902 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt ...
	I1207 12:02:29.351008    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt: {Name:mk33e0e4fcaa4129062b98a89f404ac13fe83c53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:29.351227    1902 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.key ...
	I1207 12:02:29.351230    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.key: {Name:mk3a85c81c14e6cdba476cfc433ba7efac5a6361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:29.351370    1902 certs.go:204] generating proxyClientCA CA: /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.key
	I1207 12:02:29.528195    1902 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.crt ...
	I1207 12:02:29.528201    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.crt: {Name:mk848ff91712dd117f9e2daa2abe1a1100bd7f7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:29.528483    1902 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.key ...
	I1207 12:02:29.528488    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.key: {Name:mk82299146028b2f4ef617ba49b7ceb64dcf3097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:29.528856    1902 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.key
	I1207 12:02:29.528883    1902 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt with IP's: []
	I1207 12:02:29.734031    1902 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt ...
	I1207 12:02:29.734037    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: {Name:mkcae07dad2ea0d137f56ce4326843ac50866c37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:29.734273    1902 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.key ...
	I1207 12:02:29.734277    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.key: {Name:mkd8ba312d6f2388c8cd04c328b2ab82a1027328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:29.734406    1902 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.key.96055969
	I1207 12:02:29.734416    1902 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.crt.96055969 with IP's: [192.168.105.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1207 12:02:29.775980    1902 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.crt.96055969 ...
	I1207 12:02:29.775985    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.crt.96055969: {Name:mk99962a9d2698f84c3d5e5a4fd5c077a752f32f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:29.776272    1902 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.key.96055969 ...
	I1207 12:02:29.776277    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.key.96055969: {Name:mk563c4f49a66ba4fec8b28f8c42ff3980f0d8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:29.776608    1902 certs.go:337] copying /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.crt.96055969 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.crt
	I1207 12:02:29.776725    1902 certs.go:341] copying /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.key.96055969 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.key
	I1207 12:02:29.776811    1902 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/proxy-client.key
	I1207 12:02:29.776822    1902 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/proxy-client.crt with IP's: []
	I1207 12:02:29.820041    1902 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/proxy-client.crt ...
	I1207 12:02:29.820044    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/proxy-client.crt: {Name:mke482a1e51746f6c15b6c06ce3c63e6ce113a13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:29.820184    1902 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/proxy-client.key ...
	I1207 12:02:29.820187    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/proxy-client.key: {Name:mkc15ec9815b87ba2fc56fdaca79ff0264a2aa5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:29.820430    1902 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 12:02:29.820456    1902 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem (1078 bytes)
	I1207 12:02:29.820479    1902 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem (1123 bytes)
	I1207 12:02:29.820499    1902 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/key.pem (1679 bytes)
	I1207 12:02:29.820883    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 12:02:29.828771    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 12:02:29.835632    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 12:02:29.842505    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 12:02:29.850337    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 12:02:29.857806    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 12:02:29.865007    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 12:02:29.871980    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 12:02:29.878624    1902 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 12:02:29.885907    1902 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 12:02:29.892289    1902 ssh_runner.go:195] Run: openssl version
	I1207 12:02:29.894205    1902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 12:02:29.897849    1902 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 12:02:29.899412    1902 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I1207 12:02:29.899434    1902 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 12:02:29.901219    1902 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
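[Editor's note] The `b5213941.0` name above is not arbitrary: OpenSSL locates trusted CAs in /etc/ssl/certs by a hash of the certificate's subject name, which is what the preceding `openssl x509 -hash -noout` call computes. A sketch of the same hash-and-symlink dance with a throwaway self-signed CA (the CN matches minikube's, but treat the resulting hash value as illustrative):

```shell
# Generate a throwaway CA; /CN=minikubeCA mirrors the cert the log links.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ./ca.key -out ./ca.pem \
  -subj '/CN=minikubeCA' -days 1 2>/dev/null

# Subject-hash lookup name, then the <hash>.0 symlink OpenSSL expects.
hash=$(openssl x509 -hash -noout -in ./ca.pem)
ln -fs ./ca.pem "./$hash.0"
echo "linked as $hash.0"
```

The `.0` suffix disambiguates multiple certificates whose subjects hash to the same value (`.1`, `.2`, ... would follow).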
	I1207 12:02:29.904178    1902 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 12:02:29.905468    1902 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 12:02:29.905505    1902 kubeadm.go:404] StartCluster: {Name:addons-210000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-210000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:02:29.905570    1902 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1207 12:02:29.911045    1902 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 12:02:29.914490    1902 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 12:02:29.917780    1902 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 12:02:29.920484    1902 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 12:02:29.920513    1902 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 12:02:29.942315    1902 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 12:02:29.942341    1902 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 12:02:30.002944    1902 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 12:02:30.002996    1902 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 12:02:30.003099    1902 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 12:02:30.110298    1902 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 12:02:30.120454    1902 out.go:204]   - Generating certificates and keys ...
	I1207 12:02:30.120498    1902 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 12:02:30.120530    1902 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 12:02:30.381718    1902 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 12:02:30.557480    1902 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1207 12:02:30.718347    1902 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1207 12:02:30.763475    1902 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1207 12:02:30.890410    1902 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1207 12:02:30.890472    1902 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-210000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I1207 12:02:31.075016    1902 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1207 12:02:31.075094    1902 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-210000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I1207 12:02:31.227491    1902 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 12:02:31.401890    1902 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 12:02:31.528696    1902 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1207 12:02:31.528726    1902 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 12:02:31.590834    1902 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 12:02:31.662757    1902 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 12:02:31.772162    1902 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 12:02:31.924828    1902 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 12:02:31.925037    1902 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 12:02:31.926010    1902 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 12:02:31.934420    1902 out.go:204]   - Booting up control plane ...
	I1207 12:02:31.934483    1902 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 12:02:31.934524    1902 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 12:02:31.934554    1902 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 12:02:31.934599    1902 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 12:02:31.934864    1902 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 12:02:31.934910    1902 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 12:02:32.007195    1902 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 12:02:36.008740    1902 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001676 seconds
	I1207 12:02:36.008832    1902 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 12:02:36.015240    1902 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 12:02:36.523647    1902 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 12:02:36.523758    1902 kubeadm.go:322] [mark-control-plane] Marking the node addons-210000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 12:02:37.027921    1902 kubeadm.go:322] [bootstrap-token] Using token: ggvc6i.9hyywec4l3c6jblp
	I1207 12:02:37.034237    1902 out.go:204]   - Configuring RBAC rules ...
	I1207 12:02:37.034296    1902 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 12:02:37.035602    1902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 12:02:37.042392    1902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 12:02:37.043392    1902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 12:02:37.046656    1902 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 12:02:37.047933    1902 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 12:02:37.053547    1902 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 12:02:37.193606    1902 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 12:02:37.438721    1902 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 12:02:37.439248    1902 kubeadm.go:322] 
	I1207 12:02:37.439279    1902 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 12:02:37.439282    1902 kubeadm.go:322] 
	I1207 12:02:37.439312    1902 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 12:02:37.439315    1902 kubeadm.go:322] 
	I1207 12:02:37.439326    1902 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 12:02:37.439353    1902 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 12:02:37.439388    1902 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 12:02:37.439393    1902 kubeadm.go:322] 
	I1207 12:02:37.439426    1902 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 12:02:37.439432    1902 kubeadm.go:322] 
	I1207 12:02:37.439457    1902 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 12:02:37.439463    1902 kubeadm.go:322] 
	I1207 12:02:37.439488    1902 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 12:02:37.439525    1902 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 12:02:37.439556    1902 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 12:02:37.439558    1902 kubeadm.go:322] 
	I1207 12:02:37.439611    1902 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 12:02:37.439650    1902 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 12:02:37.439656    1902 kubeadm.go:322] 
	I1207 12:02:37.439700    1902 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ggvc6i.9hyywec4l3c6jblp \
	I1207 12:02:37.439748    1902 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:828939e74f1d12618d8bb944cf208455a494cd79da1e765a74ad9e48dba341a3 \
	I1207 12:02:37.439759    1902 kubeadm.go:322] 	--control-plane 
	I1207 12:02:37.439785    1902 kubeadm.go:322] 
	I1207 12:02:37.439832    1902 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 12:02:37.439836    1902 kubeadm.go:322] 
	I1207 12:02:37.439890    1902 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ggvc6i.9hyywec4l3c6jblp \
	I1207 12:02:37.439935    1902 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:828939e74f1d12618d8bb944cf208455a494cd79da1e765a74ad9e48dba341a3 
	I1207 12:02:37.440305    1902 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 12:02:37.440313    1902 cni.go:84] Creating CNI manager for ""
	I1207 12:02:37.440321    1902 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:02:37.447855    1902 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 12:02:37.450913    1902 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 12:02:37.454462    1902 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 12:02:37.459742    1902 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 12:02:37.459798    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:37.459833    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=addons-210000 minikube.k8s.io/updated_at=2023_12_07T12_02_37_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:37.515900    1902 ops.go:34] apiserver oom_adj: -16
	I1207 12:02:37.515958    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:37.552575    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:38.095728    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:38.595726    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:39.095762    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:39.595760    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:40.095677    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:40.595711    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:41.095690    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:41.595690    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:42.095649    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:42.595708    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:43.095612    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:43.595652    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:44.095568    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:44.594577    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:45.095611    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:45.594412    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:46.095579    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:46.595572    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:47.095545    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:47.595517    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:48.095528    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:48.595500    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:49.095471    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:49.593934    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:50.095420    1902 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:02:50.128644    1902 kubeadm.go:1088] duration metric: took 12.669211375s to wait for elevateKubeSystemPrivileges.
	I1207 12:02:50.128663    1902 kubeadm.go:406] StartCluster complete in 20.22366275s
	I1207 12:02:50.128673    1902 settings.go:142] acquiring lock: {Name:mk64a7588accf4b6bd8e16cdbaa1b2c1768d52b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:50.128818    1902 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:02:50.129056    1902 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/kubeconfig: {Name:mk1f9e67cb7d73aba54460262958078aba7f1051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:02:50.129315    1902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 12:02:50.129390    1902 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
	I1207 12:02:50.129465    1902 addons.go:69] Setting ingress=true in profile "addons-210000"
	I1207 12:02:50.129469    1902 addons.go:69] Setting registry=true in profile "addons-210000"
	I1207 12:02:50.129473    1902 addons.go:231] Setting addon ingress=true in "addons-210000"
	I1207 12:02:50.129476    1902 addons.go:231] Setting addon registry=true in "addons-210000"
	I1207 12:02:50.129480    1902 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-210000"
	I1207 12:02:50.129493    1902 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-210000"
	I1207 12:02:50.129499    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:50.129502    1902 addons.go:69] Setting cloud-spanner=true in profile "addons-210000"
	I1207 12:02:50.129507    1902 addons.go:231] Setting addon cloud-spanner=true in "addons-210000"
	I1207 12:02:50.129519    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:50.129522    1902 addons.go:69] Setting ingress-dns=true in profile "addons-210000"
	I1207 12:02:50.129528    1902 addons.go:231] Setting addon ingress-dns=true in "addons-210000"
	I1207 12:02:50.129541    1902 addons.go:69] Setting storage-provisioner=true in profile "addons-210000"
	I1207 12:02:50.129554    1902 addons.go:69] Setting default-storageclass=true in profile "addons-210000"
	I1207 12:02:50.129559    1902 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-210000"
	I1207 12:02:50.129573    1902 addons.go:231] Setting addon storage-provisioner=true in "addons-210000"
	I1207 12:02:50.129574    1902 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-210000"
	I1207 12:02:50.129585    1902 addons.go:69] Setting gcp-auth=true in profile "addons-210000"
	I1207 12:02:50.129595    1902 mustload.go:65] Loading cluster: addons-210000
	I1207 12:02:50.129610    1902 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-210000"
	I1207 12:02:50.129499    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:50.129644    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:50.129666    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:50.129519    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:50.129764    1902 config.go:182] Loaded profile config "addons-210000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:02:50.129921    1902 config.go:182] Loaded profile config "addons-210000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:02:50.129925    1902 retry.go:31] will retry after 1.451418077s: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor: connect: connection refused
	I1207 12:02:50.129932    1902 retry.go:31] will retry after 1.151069504s: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor: connect: connection refused
	I1207 12:02:50.129935    1902 retry.go:31] will retry after 539.673586ms: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor: connect: connection refused
	I1207 12:02:50.129465    1902 addons.go:69] Setting volumesnapshots=true in profile "addons-210000"
	I1207 12:02:50.129941    1902 addons.go:231] Setting addon volumesnapshots=true in "addons-210000"
	I1207 12:02:50.129950    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:50.129548    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:50.130109    1902 retry.go:31] will retry after 1.306235963s: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor: connect: connection refused
	I1207 12:02:50.129552    1902 addons.go:69] Setting metrics-server=true in profile "addons-210000"
	I1207 12:02:50.130148    1902 retry.go:31] will retry after 707.598357ms: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor: connect: connection refused
	I1207 12:02:50.129573    1902 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-210000"
	I1207 12:02:50.130156    1902 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-210000"
	I1207 12:02:50.130140    1902 addons.go:231] Setting addon metrics-server=true in "addons-210000"
	I1207 12:02:50.129550    1902 addons.go:69] Setting inspektor-gadget=true in profile "addons-210000"
	I1207 12:02:50.130203    1902 retry.go:31] will retry after 682.003648ms: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor: connect: connection refused
	I1207 12:02:50.130198    1902 addons.go:231] Setting addon inspektor-gadget=true in "addons-210000"
	I1207 12:02:50.130222    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:50.130147    1902 retry.go:31] will retry after 1.151208003s: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor: connect: connection refused
	I1207 12:02:50.130269    1902 retry.go:31] will retry after 1.039218324s: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor: connect: connection refused
	I1207 12:02:50.130278    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:50.130342    1902 retry.go:31] will retry after 872.383102ms: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor: connect: connection refused
	I1207 12:02:50.130418    1902 retry.go:31] will retry after 1.352278174s: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor: connect: connection refused
	I1207 12:02:50.130642    1902 retry.go:31] will retry after 964.564426ms: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/monitor: connect: connection refused
	I1207 12:02:50.131210    1902 addons.go:231] Setting addon default-storageclass=true in "addons-210000"
	I1207 12:02:50.131219    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:50.135648    1902 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 12:02:50.132028    1902 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 12:02:50.141851    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 12:02:50.141862    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:50.141900    1902 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 12:02:50.141905    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 12:02:50.141909    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:50.144524    1902 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-210000" context rescaled to 1 replicas
	I1207 12:02:50.144539    1902 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:02:50.156800    1902 out.go:177] * Verifying Kubernetes components...
	I1207 12:02:50.162872    1902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 12:02:50.186840    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 12:02:50.186871    1902 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 12:02:50.187318    1902 node_ready.go:35] waiting up to 6m0s for node "addons-210000" to be "Ready" ...
	I1207 12:02:50.188856    1902 node_ready.go:49] node "addons-210000" has status "Ready":"True"
	I1207 12:02:50.188873    1902 node_ready.go:38] duration metric: took 1.535875ms waiting for node "addons-210000" to be "Ready" ...
	I1207 12:02:50.188877    1902 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 12:02:50.191797    1902 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-210000" in "kube-system" namespace to be "Ready" ...
	I1207 12:02:50.194958    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 12:02:50.196332    1902 pod_ready.go:92] pod "etcd-addons-210000" in "kube-system" namespace has status "Ready":"True"
	I1207 12:02:50.196338    1902 pod_ready.go:81] duration metric: took 4.532917ms waiting for pod "etcd-addons-210000" in "kube-system" namespace to be "Ready" ...
	I1207 12:02:50.196343    1902 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-210000" in "kube-system" namespace to be "Ready" ...
	I1207 12:02:50.200034    1902 pod_ready.go:92] pod "kube-apiserver-addons-210000" in "kube-system" namespace has status "Ready":"True"
	I1207 12:02:50.200043    1902 pod_ready.go:81] duration metric: took 3.696666ms waiting for pod "kube-apiserver-addons-210000" in "kube-system" namespace to be "Ready" ...
	I1207 12:02:50.200048    1902 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-210000" in "kube-system" namespace to be "Ready" ...
	I1207 12:02:50.203029    1902 pod_ready.go:92] pod "kube-controller-manager-addons-210000" in "kube-system" namespace has status "Ready":"True"
	I1207 12:02:50.203036    1902 pod_ready.go:81] duration metric: took 2.984417ms waiting for pod "kube-controller-manager-addons-210000" in "kube-system" namespace to be "Ready" ...
	I1207 12:02:50.203040    1902 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-210000" in "kube-system" namespace to be "Ready" ...
	I1207 12:02:50.208012    1902 pod_ready.go:92] pod "kube-scheduler-addons-210000" in "kube-system" namespace has status "Ready":"True"
	I1207 12:02:50.208021    1902 pod_ready.go:81] duration metric: took 4.977916ms waiting for pod "kube-scheduler-addons-210000" in "kube-system" namespace to be "Ready" ...
	I1207 12:02:50.208025    1902 pod_ready.go:38] duration metric: took 19.142625ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 12:02:50.208034    1902 api_server.go:52] waiting for apiserver process to appear ...
	I1207 12:02:50.208084    1902 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 12:02:50.709901    1902 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1207 12:02:50.713840    1902 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1207 12:02:50.713850    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1207 12:02:50.713861    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:50.717544    1902 start.go:929] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I1207 12:02:50.774682    1902 api_server.go:72] duration metric: took 630.144416ms to wait for apiserver process to appear ...
	I1207 12:02:50.774693    1902 api_server.go:88] waiting for apiserver healthz status ...
	I1207 12:02:50.774700    1902 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I1207 12:02:50.778883    1902 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
	I1207 12:02:50.780723    1902 api_server.go:141] control plane version: v1.28.4
	I1207 12:02:50.780730    1902 api_server.go:131] duration metric: took 6.034875ms to wait for apiserver health ...
	I1207 12:02:50.780734    1902 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 12:02:50.786034    1902 system_pods.go:59] 8 kube-system pods found
	I1207 12:02:50.786052    1902 system_pods.go:61] "coredns-5dd5756b68-8mdt2" [a37170df-c3c1-47bb-b603-dadc9972e3d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 12:02:50.786056    1902 system_pods.go:61] "coredns-5dd5756b68-v7vzh" [6596d95d-f6a5-40c5-9a71-6658d07f0be8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 12:02:50.786059    1902 system_pods.go:61] "etcd-addons-210000" [4b1c23b3-d3e9-4032-adc0-41756e333d0c] Running
	I1207 12:02:50.786062    1902 system_pods.go:61] "kube-apiserver-addons-210000" [f3d6882d-24b5-4c8c-b23e-f08f52affae3] Running
	I1207 12:02:50.786064    1902 system_pods.go:61] "kube-controller-manager-addons-210000" [6ecd0e29-981f-41ee-8b89-3fbf40c8b452] Running
	I1207 12:02:50.786068    1902 system_pods.go:61] "kube-proxy-slqxw" [38105871-5a4a-49f1-b52f-0cf382a70540] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 12:02:50.786070    1902 system_pods.go:61] "kube-scheduler-addons-210000" [0a7354ce-b0ad-4b62-bc55-493a365f9338] Running
	I1207 12:02:50.786073    1902 system_pods.go:61] "storage-provisioner" [c62cfd54-832d-4da1-acee-81e8f0cc47d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 12:02:50.786076    1902 system_pods.go:74] duration metric: took 5.338583ms to wait for pod list to return data ...
	I1207 12:02:50.786082    1902 default_sa.go:34] waiting for default service account to be created ...
	I1207 12:02:50.787559    1902 default_sa.go:45] found service account: "default"
	I1207 12:02:50.787567    1902 default_sa.go:55] duration metric: took 1.48175ms for default service account to be created ...
	I1207 12:02:50.787572    1902 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 12:02:50.790507    1902 system_pods.go:86] 8 kube-system pods found
	I1207 12:02:50.790518    1902 system_pods.go:89] "coredns-5dd5756b68-8mdt2" [a37170df-c3c1-47bb-b603-dadc9972e3d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 12:02:50.790522    1902 system_pods.go:89] "coredns-5dd5756b68-v7vzh" [6596d95d-f6a5-40c5-9a71-6658d07f0be8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 12:02:50.790526    1902 system_pods.go:89] "etcd-addons-210000" [4b1c23b3-d3e9-4032-adc0-41756e333d0c] Running
	I1207 12:02:50.790529    1902 system_pods.go:89] "kube-apiserver-addons-210000" [f3d6882d-24b5-4c8c-b23e-f08f52affae3] Running
	I1207 12:02:50.790532    1902 system_pods.go:89] "kube-controller-manager-addons-210000" [6ecd0e29-981f-41ee-8b89-3fbf40c8b452] Running
	I1207 12:02:50.790536    1902 system_pods.go:89] "kube-proxy-slqxw" [38105871-5a4a-49f1-b52f-0cf382a70540] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 12:02:50.790539    1902 system_pods.go:89] "kube-scheduler-addons-210000" [0a7354ce-b0ad-4b62-bc55-493a365f9338] Running
	I1207 12:02:50.790542    1902 system_pods.go:89] "storage-provisioner" [c62cfd54-832d-4da1-acee-81e8f0cc47d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 12:02:50.790550    1902 retry.go:31] will retry after 262.725076ms: missing components: kube-dns, kube-proxy
	I1207 12:02:50.791297    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1207 12:02:50.818935    1902 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1207 12:02:50.822032    1902 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 12:02:50.822039    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1207 12:02:50.822048    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:50.842969    1902 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1207 12:02:50.844256    1902 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1207 12:02:50.844263    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1207 12:02:50.844271    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:50.937512    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1207 12:02:50.939437    1902 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1207 12:02:50.939444    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1207 12:02:50.974122    1902 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1207 12:02:50.974135    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1207 12:02:51.004194    1902 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1207 12:02:51.004207    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1207 12:02:51.009004    1902 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1207 12:02:51.012996    1902 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 12:02:51.013007    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1207 12:02:51.013017    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:51.023051    1902 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1207 12:02:51.023063    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1207 12:02:51.064830    1902 system_pods.go:86] 8 kube-system pods found
	I1207 12:02:51.064843    1902 system_pods.go:89] "coredns-5dd5756b68-8mdt2" [a37170df-c3c1-47bb-b603-dadc9972e3d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 12:02:51.064847    1902 system_pods.go:89] "coredns-5dd5756b68-v7vzh" [6596d95d-f6a5-40c5-9a71-6658d07f0be8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 12:02:51.064851    1902 system_pods.go:89] "etcd-addons-210000" [4b1c23b3-d3e9-4032-adc0-41756e333d0c] Running
	I1207 12:02:51.064854    1902 system_pods.go:89] "kube-apiserver-addons-210000" [f3d6882d-24b5-4c8c-b23e-f08f52affae3] Running
	I1207 12:02:51.064856    1902 system_pods.go:89] "kube-controller-manager-addons-210000" [6ecd0e29-981f-41ee-8b89-3fbf40c8b452] Running
	I1207 12:02:51.064860    1902 system_pods.go:89] "kube-proxy-slqxw" [38105871-5a4a-49f1-b52f-0cf382a70540] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1207 12:02:51.064862    1902 system_pods.go:89] "kube-scheduler-addons-210000" [0a7354ce-b0ad-4b62-bc55-493a365f9338] Running
	I1207 12:02:51.064865    1902 system_pods.go:89] "storage-provisioner" [c62cfd54-832d-4da1-acee-81e8f0cc47d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 12:02:51.064876    1902 retry.go:31] will retry after 390.233373ms: missing components: kube-dns, kube-proxy
	I1207 12:02:51.100882    1902 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.23.1
	I1207 12:02:51.104024    1902 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1207 12:02:51.104035    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1207 12:02:51.104047    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:51.105271    1902 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 12:02:51.105278    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1207 12:02:51.154661    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 12:02:51.161299    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1207 12:02:51.171764    1902 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-210000"
	I1207 12:02:51.171785    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:51.176019    1902 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1207 12:02:51.178920    1902 out.go:177]   - Using image docker.io/busybox:stable
	I1207 12:02:51.182949    1902 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 12:02:51.182957    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1207 12:02:51.182967    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:51.236049    1902 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1207 12:02:51.236062    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1207 12:02:51.246159    1902 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1207 12:02:51.246171    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1207 12:02:51.279524    1902 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1207 12:02:51.279537    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1207 12:02:51.281734    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:51.285894    1902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1207 12:02:51.289944    1902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1207 12:02:51.293968    1902 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1207 12:02:51.297944    1902 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1207 12:02:51.298223    1902 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1207 12:02:51.307795    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1207 12:02:51.307812    1902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1207 12:02:51.317889    1902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1207 12:02:51.326904    1902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1207 12:02:51.334981    1902 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1207 12:02:51.338876    1902 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1207 12:02:51.338888    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1207 12:02:51.338898    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:51.353095    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1207 12:02:51.394046    1902 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1207 12:02:51.394057    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1207 12:02:51.440923    1902 out.go:177]   - Using image docker.io/registry:2.8.3
	I1207 12:02:51.448889    1902 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1207 12:02:51.451979    1902 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1207 12:02:51.451988    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1207 12:02:51.451999    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:51.459169    1902 system_pods.go:86] 9 kube-system pods found
	I1207 12:02:51.459182    1902 system_pods.go:89] "coredns-5dd5756b68-8mdt2" [a37170df-c3c1-47bb-b603-dadc9972e3d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 12:02:51.459187    1902 system_pods.go:89] "coredns-5dd5756b68-v7vzh" [6596d95d-f6a5-40c5-9a71-6658d07f0be8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 12:02:51.459193    1902 system_pods.go:89] "etcd-addons-210000" [4b1c23b3-d3e9-4032-adc0-41756e333d0c] Running
	I1207 12:02:51.459196    1902 system_pods.go:89] "kube-apiserver-addons-210000" [f3d6882d-24b5-4c8c-b23e-f08f52affae3] Running
	I1207 12:02:51.459198    1902 system_pods.go:89] "kube-controller-manager-addons-210000" [6ecd0e29-981f-41ee-8b89-3fbf40c8b452] Running
	I1207 12:02:51.459201    1902 system_pods.go:89] "kube-proxy-slqxw" [38105871-5a4a-49f1-b52f-0cf382a70540] Running
	I1207 12:02:51.459204    1902 system_pods.go:89] "kube-scheduler-addons-210000" [0a7354ce-b0ad-4b62-bc55-493a365f9338] Running
	I1207 12:02:51.459207    1902 system_pods.go:89] "nvidia-device-plugin-daemonset-6slck" [e708bec9-c211-46f4-9b20-f52f10b9b736] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1207 12:02:51.459211    1902 system_pods.go:89] "storage-provisioner" [c62cfd54-832d-4da1-acee-81e8f0cc47d3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 12:02:51.459216    1902 system_pods.go:126] duration metric: took 671.657875ms to wait for k8s-apps to be running ...
	I1207 12:02:51.459221    1902 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 12:02:51.459263    1902 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 12:02:51.489948    1902 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1207 12:02:51.493790    1902 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1207 12:02:51.493801    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1207 12:02:51.493812    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:51.532300    1902 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1207 12:02:51.532313    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1207 12:02:51.544494    1902 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1207 12:02:51.544505    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1207 12:02:51.586881    1902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1207 12:02:51.590943    1902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1207 12:02:51.594819    1902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1207 12:02:51.599030    1902 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 12:02:51.599041    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1207 12:02:51.599058    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:51.646802    1902 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1207 12:02:51.646814    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I1207 12:02:51.649306    1902 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1207 12:02:51.649312    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1207 12:02:51.650617    1902 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1207 12:02:51.650623    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1207 12:02:51.672959    1902 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1207 12:02:51.672968    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1207 12:02:51.675859    1902 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1207 12:02:51.675865    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1207 12:02:51.693862    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1207 12:02:51.698583    1902 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1207 12:02:51.698590    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1207 12:02:51.725010    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1207 12:02:51.747942    1902 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1207 12:02:51.747955    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1207 12:02:51.788744    1902 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1207 12:02:51.788755    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1207 12:02:51.815238    1902 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1207 12:02:51.815250    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1207 12:02:51.828775    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1207 12:02:51.842673    1902 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1207 12:02:51.842684    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1207 12:02:51.879602    1902 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 12:02:51.879614    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1207 12:02:51.907167    1902 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1207 12:02:51.907179    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1207 12:02:51.953668    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1207 12:02:52.000628    1902 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1207 12:02:52.000637    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1207 12:02:52.021278    1902 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1207 12:02:52.021290    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1207 12:02:52.086167    1902 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 12:02:52.086179    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1207 12:02:52.156988    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1207 12:02:52.260418    1902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.105764833s)
	I1207 12:02:52.260435    1902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.09915025s)
	W1207 12:02:52.260437    1902 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1207 12:02:52.260450    1902 retry.go:31] will retry after 259.195132ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
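The `apply failed, will retry` block above is the classic CRD-establishment race: the `VolumeSnapshotClass` object is applied in the same batch as the CRD that defines it, and the API server has not finished establishing the CRD when the custom resource arrives, so resource mapping fails ("no matches for kind") until a later attempt. minikube's `retry.go` wrapper absorbs this by re-running the apply after a backoff. A minimal sketch of the same retry-until-success pattern in shell (the `retry` helper, attempt count, and backoff values are illustrative, not minikube's actual code):

```shell
#!/bin/sh
# Retry a command with exponential backoff until it succeeds or
# attempts run out -- the pattern behind the "will retry after ..."
# lines in the log, where the first apply fails while the CRD is
# still being established and a later attempt succeeds.
retry() {
  attempts=$1; shift
  delay=1
  n=1
  while :; do
    "$@" && return 0                      # success: stop retrying
    [ "$n" -ge "$attempts" ] && return 1  # out of attempts: give up
    echo "attempt $n failed, will retry after ${delay}s" >&2
    sleep "$delay"
    delay=$((delay * 2))                  # exponential backoff
    n=$((n + 1))
  done
}

# Illustrative usage (path taken from the log above):
# retry 5 kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
```

In the log, the retried command at 12:02:52 also adds `--force` to the apply, and the batch eventually completes about two seconds later once the snapshot CRDs are established.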
	I1207 12:02:52.521796    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1207 12:02:52.775485    1902 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.316241834s)
	I1207 12:02:52.775503    1902 system_svc.go:56] duration metric: took 1.316312625s WaitForService to wait for kubelet.
	I1207 12:02:52.775508    1902 kubeadm.go:581] duration metric: took 2.631024125s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 12:02:52.775518    1902 node_conditions.go:102] verifying NodePressure condition ...
	I1207 12:02:52.775626    1902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.422550458s)
	I1207 12:02:52.777901    1902 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1207 12:02:52.777930    1902 node_conditions.go:123] node cpu capacity is 2
	I1207 12:02:52.777935    1902 node_conditions.go:105] duration metric: took 2.411583ms to run NodePressure ...
	I1207 12:02:52.777941    1902 start.go:228] waiting for startup goroutines ...
	I1207 12:02:52.964395    1902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.270542334s)
	I1207 12:02:52.964413    1902 addons.go:467] Verifying addon registry=true in "addons-210000"
	I1207 12:02:52.967636    1902 out.go:177] * Verifying registry addon...
	I1207 12:02:52.975113    1902 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1207 12:02:52.979245    1902 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1207 12:02:52.979254    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:52.992005    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:53.285496    1902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (1.560502125s)
	I1207 12:02:53.503094    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:53.996280    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:54.416956    1902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (2.588230625s)
	I1207 12:02:54.416973    1902 addons.go:467] Verifying addon ingress=true in "addons-210000"
	I1207 12:02:54.416984    1902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.463364s)
	I1207 12:02:54.416990    1902 addons.go:467] Verifying addon metrics-server=true in "addons-210000"
	I1207 12:02:54.423528    1902 out.go:177] * Verifying ingress addon...
	I1207 12:02:54.430873    1902 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1207 12:02:54.432978    1902 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1207 12:02:54.432985    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:54.435848    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:54.494007    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:54.662574    1902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.505622625s)
	I1207 12:02:54.662595    1902 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-210000"
	I1207 12:02:54.662613    1902 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.140850417s)
	I1207 12:02:54.665508    1902 out.go:177] * Verifying csi-hostpath-driver addon...
	I1207 12:02:54.669016    1902 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1207 12:02:54.672169    1902 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1207 12:02:54.672177    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:54.679215    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:54.943365    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:54.996658    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:55.183813    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:55.439013    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:55.495509    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:55.682089    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:55.939884    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:55.996327    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:56.312368    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:56.440143    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:56.495167    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:56.682042    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:56.941157    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:56.996235    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:57.184401    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:57.439850    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:57.495585    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:57.683983    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:57.940292    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:57.996209    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:58.087642    1902 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1207 12:02:58.087659    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:58.122595    1902 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1207 12:02:58.127785    1902 addons.go:231] Setting addon gcp-auth=true in "addons-210000"
	I1207 12:02:58.127806    1902 host.go:66] Checking if "addons-210000" exists ...
	I1207 12:02:58.128662    1902 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1207 12:02:58.128669    1902 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/addons-210000/id_rsa Username:docker}
	I1207 12:02:58.163292    1902 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1207 12:02:58.167289    1902 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1207 12:02:58.170227    1902 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1207 12:02:58.170232    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1207 12:02:58.175282    1902 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1207 12:02:58.175288    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1207 12:02:58.180268    1902 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 12:02:58.180275    1902 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1207 12:02:58.182642    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:58.185741    1902 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1207 12:02:58.443983    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:58.496510    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:58.690171    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:58.708135    1902 addons.go:467] Verifying addon gcp-auth=true in "addons-210000"
	I1207 12:02:58.712324    1902 out.go:177] * Verifying gcp-auth addon...
	I1207 12:02:58.719621    1902 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1207 12:02:58.723649    1902 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1207 12:02:58.723658    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:02:58.727151    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:02:58.940114    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:58.995379    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:59.183991    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:59.230765    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:02:59.440369    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:59.496656    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:02:59.683831    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:02:59.730529    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:02:59.940122    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:02:59.996030    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:00.183873    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:00.230598    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:00.440155    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:00.496134    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:00.683988    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:00.730535    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:00.939376    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:00.997435    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:01.183829    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:01.230963    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:01.439919    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:01.495542    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:01.684103    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:01.731692    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:01.940032    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:01.996326    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:02.184201    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:02.230976    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:02.440080    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:02.496411    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:02.683888    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:02.731007    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:02.940144    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:02.996094    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:03.184034    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:03.231047    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:03.439983    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:03.497207    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:03.682587    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:03.730990    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:03.940416    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:03.996062    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:04.184035    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:04.230961    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:04.440200    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:04.496413    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:04.683966    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:04.729932    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:05.051379    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:05.051436    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:05.185450    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:05.230931    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:05.439782    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:05.495844    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:05.683969    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:05.728938    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:05.939771    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:05.995793    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:06.183756    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:06.230500    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:06.440044    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:06.497745    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:06.683981    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:06.730668    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:06.939756    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:06.995837    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:07.182559    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:07.230493    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:07.440138    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:07.495709    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:07.683818    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:07.730548    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:07.940459    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:08.032286    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:08.183534    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:08.230253    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:08.440130    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:08.495484    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:08.685137    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:08.730383    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:08.940122    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:08.995487    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:09.183502    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:09.230371    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:09.441032    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:09.495803    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:09.683630    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:09.730563    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:09.939521    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:09.995606    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:10.183560    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:10.230310    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:10.437928    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:10.495515    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:10.683506    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:10.728420    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:10.939869    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:10.997686    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:11.183422    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:11.230650    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:11.439450    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:11.495730    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:11.683558    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:11.730344    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:11.938426    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:11.995868    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:12.183540    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:12.229444    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:12.439617    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:12.495308    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:12.683287    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:12.730442    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:12.939863    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:12.995360    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:13.183481    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:13.228646    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:13.439615    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:13.495445    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:13.683535    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:13.730603    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:13.939792    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:13.995930    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:14.183621    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:14.230620    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:14.439765    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:14.493930    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:14.682914    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:14.730689    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:14.939769    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:14.995574    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:15.183429    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:15.230614    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:15.439644    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:15.495713    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:15.683487    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:15.730399    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:15.945637    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:15.995923    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:16.183482    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:16.230318    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:16.439409    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:16.495270    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1207 12:03:16.683209    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:16.728254    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:16.939727    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:16.995740    1902 kapi.go:107] duration metric: took 24.021226792s to wait for kubernetes.io/minikube-addons=registry ...
	I1207 12:03:17.183387    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:17.230555    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:17.439601    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:17.683517    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:17.730477    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:17.939608    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:18.183432    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:18.231101    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:18.439538    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:18.683892    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:18.730365    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:18.939030    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:19.184067    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:19.230502    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:19.439456    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:19.683474    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:19.730896    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:19.940679    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:20.183502    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:20.230397    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:20.438481    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:20.683368    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:20.730463    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:20.939753    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:21.183327    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:21.230057    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:21.439592    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:21.683446    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:21.730126    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:21.939412    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:22.183287    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:22.230071    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:22.439864    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:22.683614    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:22.730180    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:22.939657    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:23.182870    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:23.230166    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:23.439812    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:23.682099    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:23.730007    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:23.939120    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:24.183120    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:24.228416    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:24.439264    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:24.683484    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:24.730758    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:24.939412    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:25.183212    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:25.230258    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:25.439667    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:25.683344    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:25.730323    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:25.939476    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:26.183167    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:26.229906    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:26.439501    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:26.683284    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:26.734421    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:26.939468    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:27.183281    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:27.230707    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:27.439698    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:27.683815    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:27.730255    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:27.939384    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:28.183245    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:28.230119    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:28.439221    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:28.685275    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:28.729905    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:28.939469    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:29.183476    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:29.230174    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:29.439178    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:29.683292    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:29.730381    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:29.939320    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:30.183063    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:30.230128    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:30.439076    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:30.683335    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:30.730362    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:30.939177    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:31.183042    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:31.229847    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:31.439194    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:31.683231    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:31.730051    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:31.939321    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:32.183314    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:32.229780    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:32.439399    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:32.683037    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:32.730038    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:32.939466    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:33.183572    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:33.229802    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:33.439236    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:33.683330    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:33.729803    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:33.939473    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:34.183021    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:34.229814    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:34.439397    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:34.684484    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:34.794513    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:34.939168    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:35.182993    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:35.229682    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:35.439104    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:35.682190    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:35.730031    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:35.939190    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:36.182969    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:36.229582    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:36.438839    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:36.682855    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:36.729812    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:36.939346    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:37.182982    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:37.230131    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:37.439116    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:37.682972    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:37.729956    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:37.939097    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:38.182921    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:38.230020    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:38.438969    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:38.683014    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:38.729650    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:38.938976    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:39.183991    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:39.229836    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:39.438261    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:39.682860    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:39.729539    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:39.939029    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:40.182887    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:40.229535    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:40.439265    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:40.683214    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:40.729584    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:40.938985    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:41.183656    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:41.229792    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:41.438977    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:41.682834    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:41.728196    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:41.939025    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:42.183147    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:42.229421    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:42.438728    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:42.682778    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:42.729858    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:42.938894    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:43.182674    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:43.229504    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:43.438626    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:43.681050    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:43.729755    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:43.938998    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:44.182547    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:44.229756    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:44.438981    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:44.682662    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:44.728673    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:44.938964    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:45.182996    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:45.229279    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:45.439378    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:45.682963    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:45.729557    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:45.940107    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:46.182628    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:46.229679    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:46.438954    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:46.684312    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:46.729735    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:46.938333    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:47.182710    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:47.229991    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:47.438913    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:47.683454    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:47.729496    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:47.938706    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:48.182870    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:48.229360    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:48.438671    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:48.682335    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:48.729604    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:48.938546    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:49.182598    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:49.229619    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:49.438647    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:49.683601    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:49.729584    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:49.938853    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:50.182690    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:50.229697    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:50.438517    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:50.682625    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1207 12:03:50.730182    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:50.939080    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:51.182729    1902 kapi.go:107] duration metric: took 56.51512125s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1207 12:03:51.229240    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:51.438615    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:51.728598    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:51.938421    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:52.229346    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:52.436753    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:52.729904    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:52.938697    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:53.229448    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:53.438397    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:53.729775    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:53.938398    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:54.229645    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:54.438492    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:54.729541    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:54.938660    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:55.229299    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:55.437658    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:55.729911    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:55.938432    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:56.229203    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:56.439059    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:56.729293    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:56.938556    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:57.230478    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:57.438694    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:57.730096    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:57.937735    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:58.229291    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:58.438443    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:58.729624    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:58.938633    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:59.229458    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:59.438337    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:03:59.730239    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:03:59.939049    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:00.229467    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:00.438969    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:00.729544    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:00.938651    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:01.229814    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:01.436681    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:01.729524    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:01.938356    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:02.229520    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:02.439069    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:02.729416    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:02.938678    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:03.229456    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:03.439234    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:03.729341    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:03.938506    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:04.229484    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:04.438898    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:04.729181    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:04.938673    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:05.229019    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:05.585608    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:05.729539    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:05.938271    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:06.229522    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:06.438808    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:06.729339    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:06.938586    1902 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1207 12:04:07.229288    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:07.438156    1902 kapi.go:107] duration metric: took 1m13.009102125s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1207 12:04:07.727745    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:08.229148    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:08.729469    1902 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1207 12:04:09.229284    1902 kapi.go:107] duration metric: took 1m10.511416834s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1207 12:04:09.234605    1902 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-210000 cluster.
	I1207 12:04:09.238478    1902 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1207 12:04:09.241461    1902 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1207 12:04:09.247478    1902 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, cloud-spanner, nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, inspektor-gadget, metrics-server, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1207 12:04:09.251455    1902 addons.go:502] enable addons completed in 1m19.124046625s: enabled=[default-storageclass storage-provisioner cloud-spanner nvidia-device-plugin ingress-dns storage-provisioner-rancher inspektor-gadget metrics-server volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1207 12:04:09.251468    1902 start.go:233] waiting for cluster config update ...
	I1207 12:04:09.251476    1902 start.go:242] writing updated cluster config ...
	I1207 12:04:09.251850    1902 ssh_runner.go:195] Run: rm -f paused
	I1207 12:04:09.379535    1902 start.go:600] kubectl: 1.28.2, cluster: 1.28.4 (minor skew: 0)
	I1207 12:04:09.383563    1902 out.go:177] * Done! kubectl is now configured to use "addons-210000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-12-07 20:02:18 UTC, ends at Thu 2023-12-07 20:05:14 UTC. --
	Dec 07 20:05:10 addons-210000 dockerd[1111]: time="2023-12-07T20:05:10.120787992Z" level=warning msg="cleaning up after shim disconnected" id=b8f4494a6e169e8c375cdc23da0beb0fd7431540fa77221e68552dc5a1c2f49a namespace=moby
	Dec 07 20:05:10 addons-210000 dockerd[1111]: time="2023-12-07T20:05:10.120792367Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 07 20:05:10 addons-210000 dockerd[1105]: time="2023-12-07T20:05:10.121025453Z" level=info msg="ignoring event" container=b8f4494a6e169e8c375cdc23da0beb0fd7431540fa77221e68552dc5a1c2f49a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:10 addons-210000 dockerd[1105]: time="2023-12-07T20:05:10.193141967Z" level=info msg="ignoring event" container=fb530770b020b6a025d2d26c442fca509ed56003c243e00b3edb997f470176f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:10 addons-210000 dockerd[1111]: time="2023-12-07T20:05:10.193243594Z" level=info msg="shim disconnected" id=fb530770b020b6a025d2d26c442fca509ed56003c243e00b3edb997f470176f2 namespace=moby
	Dec 07 20:05:10 addons-210000 dockerd[1111]: time="2023-12-07T20:05:10.193289386Z" level=warning msg="cleaning up after shim disconnected" id=fb530770b020b6a025d2d26c442fca509ed56003c243e00b3edb997f470176f2 namespace=moby
	Dec 07 20:05:10 addons-210000 dockerd[1111]: time="2023-12-07T20:05:10.193293719Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.340731252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.340762169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.340772586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.340778878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:05:11 addons-210000 cri-dockerd[999]: time="2023-12-07T20:05:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3ad63463da534969acd7952ce48c18e4e6fb84731573cda0928290323f64611e/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.499198940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.499252191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.499269941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.499280692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:05:11 addons-210000 dockerd[1105]: time="2023-12-07T20:05:11.542467534Z" level=info msg="ignoring event" container=8a9ad47a050636503ac0addabdbd3c2ece63bc3df067fb31820922fabb18a943 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.542699163Z" level=info msg="shim disconnected" id=8a9ad47a050636503ac0addabdbd3c2ece63bc3df067fb31820922fabb18a943 namespace=moby
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.542727288Z" level=warning msg="cleaning up after shim disconnected" id=8a9ad47a050636503ac0addabdbd3c2ece63bc3df067fb31820922fabb18a943 namespace=moby
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.542731497Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 07 20:05:11 addons-210000 dockerd[1111]: time="2023-12-07T20:05:11.547037689Z" level=warning msg="cleanup warnings time=\"2023-12-07T20:05:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Dec 07 20:05:12 addons-210000 dockerd[1111]: time="2023-12-07T20:05:12.953813825Z" level=info msg="shim disconnected" id=3ad63463da534969acd7952ce48c18e4e6fb84731573cda0928290323f64611e namespace=moby
	Dec 07 20:05:12 addons-210000 dockerd[1111]: time="2023-12-07T20:05:12.953856826Z" level=warning msg="cleaning up after shim disconnected" id=3ad63463da534969acd7952ce48c18e4e6fb84731573cda0928290323f64611e namespace=moby
	Dec 07 20:05:12 addons-210000 dockerd[1111]: time="2023-12-07T20:05:12.953861326Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 07 20:05:12 addons-210000 dockerd[1105]: time="2023-12-07T20:05:12.954064829Z" level=info msg="ignoring event" container=3ad63463da534969acd7952ce48c18e4e6fb84731573cda0928290323f64611e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED              STATE               NAME                       ATTEMPT             POD ID              POD
	8a9ad47a05063       fc9db2894f4e4                                                                                                                3 seconds ago        Exited              helper-pod                 0                   3ad63463da534       helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd
	b72a3cf93d569       dd1b12fcb6097                                                                                                                6 seconds ago        Exited              hello-world-app            2                   b04fa1a37e8a4       hello-world-app-5d77478584-ds6ll
	0212c7f9ce3c3       busybox@sha256:1ceb872bcc68a8fcd34c97952658b58086affdcb604c90c1dee2735bde5edc2f                                              7 seconds ago        Exited              busybox                    0                   317d58e783a7d       test-local-path
	3ee5f65549545       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              12 seconds ago       Exited              helper-pod                 0                   cec41b4ceb96f       helper-pod-create-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd
	d1ebe90fec440       nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                                                29 seconds ago       Running             nginx                      0                   c75f677771522       nginx
	a98e34099252c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:d730651bb6584f969d95d8279a754cf9d8d31b5055c43dbdb8d7363a8c6371cf                 About a minute ago   Running             gcp-auth                   0                   e5d706fa2664c       gcp-auth-d4c87556c-lxklp
	1798d5fde1e8c       af594c6a879f2                                                                                                                About a minute ago   Exited              patch                      1                   0d38d7a327b40       ingress-nginx-admission-patch-8fxc2
	b1878afca6afc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80   About a minute ago   Exited              create                     0                   2f51201d0e4be       ingress-nginx-admission-create-nrq5r
	8e12d78e42679       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       2 minutes ago        Running             local-path-provisioner     0                   b310ff0403d0c       local-path-provisioner-78b46b4d5c-5bvzx
	c4f044ed328b4       nvcr.io/nvidia/k8s-device-plugin@sha256:339be23400f58c04f09b6ba1d4d2e0e7120648f2b114880513685b22093311f1                     2 minutes ago        Running             nvidia-device-plugin-ctr   0                   9cc496b9a778c       nvidia-device-plugin-daemonset-6slck
	4eb80cdba68fd       gcr.io/cloud-spanner-emulator/emulator@sha256:9ded3fac22d4d1c85ae51473e3876e2377f5179192fea664409db0fe87e05ece               2 minutes ago        Running             cloud-spanner-emulator     0                   2ecaafcf38caa       cloud-spanner-emulator-5649c69bf6-wh4h8
	9af1db4f71cd3       ba04bb24b9575                                                                                                                2 minutes ago        Running             storage-provisioner        0                   69430d5343098       storage-provisioner
	d8322b0b38b66       3ca3ca488cf13                                                                                                                2 minutes ago        Running             kube-proxy                 0                   fec6994c29351       kube-proxy-slqxw
	d382adfc8ea3f       97e04611ad434                                                                                                                2 minutes ago        Running             coredns                    0                   fe6bbd0c0aa6b       coredns-5dd5756b68-v7vzh
	22d96034aa264       04b4c447bb9d4                                                                                                                2 minutes ago        Running             kube-apiserver             0                   eb8116d10025c       kube-apiserver-addons-210000
	4ef8b120fe83e       9961cbceaf234                                                                                                                2 minutes ago        Running             kube-controller-manager    0                   a0481a15e56c6       kube-controller-manager-addons-210000
	7df8963913f83       9cdd6470f48c8                                                                                                                2 minutes ago        Running             etcd                       0                   cd3f906180f81       etcd-addons-210000
	648518e442844       05c284c929889                                                                                                                2 minutes ago        Running             kube-scheduler             0                   905d444e3984e       kube-scheduler-addons-210000
	
	* 
	* ==> coredns [d382adfc8ea3] <==
	* [INFO] 10.244.0.18:52418 - 27140 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000034582s
	[INFO] 10.244.0.18:36832 - 64614 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000020207s
	[INFO] 10.244.0.18:52418 - 11033 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032207s
	[INFO] 10.244.0.18:52418 - 40247 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000066872s
	[INFO] 10.244.0.18:36832 - 52884 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032749s
	[INFO] 10.244.0.18:52418 - 61775 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000034749s
	[INFO] 10.244.0.18:36832 - 35833 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000064205s
	[INFO] 10.244.0.18:36832 - 59149 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012457s
	[INFO] 10.244.0.18:36832 - 26126 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011708s
	[INFO] 10.244.0.18:52418 - 25641 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000083955s
	[INFO] 10.244.0.18:36832 - 38819 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000015874s
	[INFO] 10.244.0.18:40485 - 24796 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000033374s
	[INFO] 10.244.0.18:40480 - 23092 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00018895s
	[INFO] 10.244.0.18:40480 - 25901 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000013374s
	[INFO] 10.244.0.18:40485 - 55503 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000032665s
	[INFO] 10.244.0.18:40480 - 53677 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013249s
	[INFO] 10.244.0.18:40485 - 51907 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011875s
	[INFO] 10.244.0.18:40485 - 51310 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001125s
	[INFO] 10.244.0.18:40480 - 28946 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063498s
	[INFO] 10.244.0.18:40480 - 52752 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010457s
	[INFO] 10.244.0.18:40485 - 62889 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011041s
	[INFO] 10.244.0.18:40480 - 23545 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013666s
	[INFO] 10.244.0.18:40485 - 3216 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000010583s
	[INFO] 10.244.0.18:40480 - 22546 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000012374s
	[INFO] 10.244.0.18:40485 - 51733 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000010374s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-210000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-210000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=addons-210000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T12_02_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-210000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:02:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-210000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:05:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:05:09 +0000   Thu, 07 Dec 2023 20:02:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:05:09 +0000   Thu, 07 Dec 2023 20:02:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:05:09 +0000   Thu, 07 Dec 2023 20:02:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:05:09 +0000   Thu, 07 Dec 2023 20:02:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-210000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904696Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904696Ki
	  pods:               110
	System Info:
	  Machine ID:                 1491c5954ad342bbbcb3ede22f01bdea
	  System UUID:                1491c5954ad342bbbcb3ede22f01bdea
	  Boot ID:                    aa7bd9fa-b109-404e-ac09-5762a289ed1b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5649c69bf6-wh4h8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  default                     hello-world-app-5d77478584-ds6ll           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  gcp-auth                    gcp-auth-d4c87556c-lxklp                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 coredns-5dd5756b68-v7vzh                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m24s
	  kube-system                 etcd-addons-210000                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m37s
	  kube-system                 kube-apiserver-addons-210000               250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-controller-manager-addons-210000      200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-proxy-slqxw                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-addons-210000               100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 nvidia-device-plugin-daemonset-6slck       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  local-path-storage          local-path-provisioner-78b46b4d5c-5bvzx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m23s  kube-proxy       
	  Normal  Starting                 2m38s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m37s  kubelet          Node addons-210000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m37s  kubelet          Node addons-210000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m37s  kubelet          Node addons-210000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m36s  kubelet          Node addons-210000 status is now: NodeReady
	  Normal  RegisteredNode           2m26s  node-controller  Node addons-210000 event: Registered Node addons-210000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.061216] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +1.223289] systemd-fstab-generator[903]: Ignoring "noauto" for root device
	[  +0.066238] systemd-fstab-generator[926]: Ignoring "noauto" for root device
	[  +0.065215] systemd-fstab-generator[937]: Ignoring "noauto" for root device
	[  +0.064610] systemd-fstab-generator[948]: Ignoring "noauto" for root device
	[  +0.070309] systemd-fstab-generator[985]: Ignoring "noauto" for root device
	[  +2.611231] systemd-fstab-generator[1098]: Ignoring "noauto" for root device
	[  +2.773889] kauditd_printk_skb: 137 callbacks suppressed
	[  +2.786892] systemd-fstab-generator[1459]: Ignoring "noauto" for root device
	[  +5.099620] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.012595] systemd-fstab-generator[2374]: Ignoring "noauto" for root device
	[ +13.799962] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.572249] kauditd_printk_skb: 105 callbacks suppressed
	[Dec 7 20:03] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[  +6.533777] kauditd_printk_skb: 6 callbacks suppressed
	[ +25.764758] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.548973] kauditd_printk_skb: 4 callbacks suppressed
	[Dec 7 20:04] kauditd_printk_skb: 9 callbacks suppressed
	[ +21.490885] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.378200] kauditd_printk_skb: 17 callbacks suppressed
	[ +10.796413] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.245621] kauditd_printk_skb: 6 callbacks suppressed
	[  +9.226485] kauditd_printk_skb: 21 callbacks suppressed
	[Dec 7 20:05] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.313466] kauditd_printk_skb: 14 callbacks suppressed
	
	* 
	* ==> etcd [7df8963913f8] <==
	* {"level":"info","ts":"2023-12-07T20:02:33.341219Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:02:33.34141Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T20:02:33.341446Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T20:02:33.341477Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:02:33.34154Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:02:33.34158Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:02:33.341605Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:02:33.342037Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2023-12-07T20:02:33.369234Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2023-12-07T20:02:56.220858Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.385519ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78073"}
	{"level":"info","ts":"2023-12-07T20:02:56.22089Z","caller":"traceutil/trace.go:171","msg":"trace[376953957] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:734; }","duration":"131.424427ms","start":"2023-12-07T20:02:56.08946Z","end":"2023-12-07T20:02:56.220884Z","steps":["trace[376953957] 'range keys from in-memory index tree'  (duration: 131.270586ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:03:05.092526Z","caller":"traceutil/trace.go:171","msg":"trace[1212192610] linearizableReadLoop","detail":"{readStateIndex:829; appliedIndex:828; }","duration":"121.947097ms","start":"2023-12-07T20:03:04.970571Z","end":"2023-12-07T20:03:05.092518Z","steps":["trace[1212192610] 'read index received'  (duration: 121.840131ms)","trace[1212192610] 'applied index is now lower than readState.Index'  (duration: 106.462µs)"],"step_count":2}
	{"level":"info","ts":"2023-12-07T20:03:05.092645Z","caller":"traceutil/trace.go:171","msg":"trace[1746306397] transaction","detail":"{read_only:false; response_revision:810; number_of_response:1; }","duration":"211.018603ms","start":"2023-12-07T20:03:04.881623Z","end":"2023-12-07T20:03:05.092642Z","steps":["trace[1746306397] 'process raft request'  (duration: 210.810926ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T20:03:05.09272Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.156806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-12-07T20:03:05.092734Z","caller":"traceutil/trace.go:171","msg":"trace[320054940] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:810; }","duration":"122.176747ms","start":"2023-12-07T20:03:04.970554Z","end":"2023-12-07T20:03:05.092731Z","steps":["trace[320054940] 'agreement among raft nodes before linearized reading'  (duration: 122.149502ms)"],"step_count":1}
	{"level":"warn","ts":"2023-12-07T20:03:05.092784Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.3301ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13485"}
	{"level":"info","ts":"2023-12-07T20:03:05.092795Z","caller":"traceutil/trace.go:171","msg":"trace[1263501605] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:810; }","duration":"111.341813ms","start":"2023-12-07T20:03:04.981451Z","end":"2023-12-07T20:03:05.092793Z","steps":["trace[1263501605] 'agreement among raft nodes before linearized reading'  (duration: 111.317548ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:04:05.701742Z","caller":"traceutil/trace.go:171","msg":"trace[1407711444] linearizableReadLoop","detail":"{readStateIndex:1117; appliedIndex:1116; }","duration":"147.008861ms","start":"2023-12-07T20:04:05.554725Z","end":"2023-12-07T20:04:05.701734Z","steps":["trace[1407711444] 'read index received'  (duration: 146.884714ms)","trace[1407711444] 'applied index is now lower than readState.Index'  (duration: 123.605µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T20:04:05.701806Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.082123ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13852"}
	{"level":"info","ts":"2023-12-07T20:04:05.701817Z","caller":"traceutil/trace.go:171","msg":"trace[1994320809] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1083; }","duration":"147.105669ms","start":"2023-12-07T20:04:05.554708Z","end":"2023-12-07T20:04:05.701814Z","steps":["trace[1994320809] 'agreement among raft nodes before linearized reading'  (duration: 147.059411ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:04:05.701879Z","caller":"traceutil/trace.go:171","msg":"trace[433100190] transaction","detail":"{read_only:false; response_revision:1083; number_of_response:1; }","duration":"167.49416ms","start":"2023-12-07T20:04:05.534382Z","end":"2023-12-07T20:04:05.701876Z","steps":["trace[433100190] 'process raft request'  (duration: 167.298291ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:04:22.179508Z","caller":"traceutil/trace.go:171","msg":"trace[526965316] linearizableReadLoop","detail":"{readStateIndex:1210; appliedIndex:1209; }","duration":"162.036588ms","start":"2023-12-07T20:04:22.01746Z","end":"2023-12-07T20:04:22.179496Z","steps":["trace[526965316] 'read index received'  (duration: 161.938325ms)","trace[526965316] 'applied index is now lower than readState.Index'  (duration: 97.805µs)"],"step_count":2}
	{"level":"warn","ts":"2023-12-07T20:04:22.179583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.122058ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:8696"}
	{"level":"info","ts":"2023-12-07T20:04:22.179597Z","caller":"traceutil/trace.go:171","msg":"trace[975785606] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1171; }","duration":"162.148811ms","start":"2023-12-07T20:04:22.017445Z","end":"2023-12-07T20:04:22.179593Z","steps":["trace[975785606] 'agreement among raft nodes before linearized reading'  (duration: 162.098805ms)"],"step_count":1}
	{"level":"info","ts":"2023-12-07T20:04:22.179604Z","caller":"traceutil/trace.go:171","msg":"trace[1985666502] transaction","detail":"{read_only:false; response_revision:1171; number_of_response:1; }","duration":"200.940796ms","start":"2023-12-07T20:04:21.978658Z","end":"2023-12-07T20:04:22.179599Z","steps":["trace[1985666502] 'process raft request'  (duration: 200.761563ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [a98e34099252] <==
	* 2023/12/07 20:04:09 GCP Auth Webhook started!
	2023/12/07 20:04:17 Ready to marshal response ...
	2023/12/07 20:04:17 Ready to write response ...
	2023/12/07 20:04:19 Ready to marshal response ...
	2023/12/07 20:04:19 Ready to write response ...
	2023/12/07 20:04:41 Ready to marshal response ...
	2023/12/07 20:04:41 Ready to write response ...
	2023/12/07 20:04:45 Ready to marshal response ...
	2023/12/07 20:04:45 Ready to write response ...
	2023/12/07 20:04:51 Ready to marshal response ...
	2023/12/07 20:04:51 Ready to write response ...
	2023/12/07 20:05:00 Ready to marshal response ...
	2023/12/07 20:05:00 Ready to write response ...
	2023/12/07 20:05:00 Ready to marshal response ...
	2023/12/07 20:05:00 Ready to write response ...
	2023/12/07 20:05:10 Ready to marshal response ...
	2023/12/07 20:05:10 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  20:05:14 up 2 min,  0 users,  load average: 0.61, 0.43, 0.18
	Linux addons-210000 5.10.57 #1 SMP PREEMPT Tue Dec 5 16:07:42 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [22d96034aa26] <==
	* W1207 20:04:37.280399       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1207 20:04:41.732849       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I1207 20:04:41.851362       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.130.177"}
	I1207 20:04:51.095816       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.25.74"}
	I1207 20:05:00.196551       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:00.196567       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:00.200959       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:00.200979       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:00.206478       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:00.206491       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:00.211096       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:00.211114       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:00.221672       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:00.221698       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:00.227897       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:00.227915       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:00.237873       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:00.237892       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1207 20:05:00.243490       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1207 20:05:00.243516       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1207 20:05:01.206517       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1207 20:05:01.238747       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1207 20:05:01.250542       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1207 20:05:07.081613       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E1207 20:05:08.080549       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	* 
	* ==> kube-controller-manager [4ef8b120fe83] <==
	* E1207 20:05:01.251048       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:05:02.641802       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:02.641820       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:05:02.667772       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:02.667783       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:05:02.712652       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:02.712660       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1207 20:05:04.072039       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	W1207 20:05:05.003491       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:05.003513       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:05:05.246206       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:05.246238       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:05:05.580344       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:05.580363       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1207 20:05:07.039307       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1207 20:05:07.039953       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="2.584µs"
	I1207 20:05:07.046320       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1207 20:05:09.095473       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:09.095492       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1207 20:05:09.869763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="51.209µs"
	W1207 20:05:09.968053       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:09.968091       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1207 20:05:09.994747       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1207 20:05:09.994759       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1207 20:05:11.229993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="3.75µs"
	
	* 
	* ==> kube-proxy [d8322b0b38b6] <==
	* I1207 20:02:50.921929       1 server_others.go:69] "Using iptables proxy"
	I1207 20:02:50.939033       1 node.go:141] Successfully retrieved node IP: 192.168.105.2
	I1207 20:02:51.017533       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 20:02:51.017668       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 20:02:51.019427       1 server_others.go:152] "Using iptables Proxier"
	I1207 20:02:51.019479       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 20:02:51.019578       1 server.go:846] "Version info" version="v1.28.4"
	I1207 20:02:51.019584       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:02:51.030037       1 config.go:188] "Starting service config controller"
	I1207 20:02:51.030049       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 20:02:51.030062       1 config.go:97] "Starting endpoint slice config controller"
	I1207 20:02:51.030064       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 20:02:51.034952       1 config.go:315] "Starting node config controller"
	I1207 20:02:51.034975       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 20:02:51.130428       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 20:02:51.130452       1 shared_informer.go:318] Caches are synced for service config
	I1207 20:02:51.135114       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [648518e44284] <==
	* W1207 20:02:34.196660       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1207 20:02:34.196684       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1207 20:02:34.196711       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 20:02:34.196728       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1207 20:02:34.196756       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 20:02:34.196777       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1207 20:02:34.196810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 20:02:34.196955       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1207 20:02:34.197019       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 20:02:34.197047       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 20:02:34.197079       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 20:02:34.197095       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 20:02:35.145685       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 20:02:35.145700       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1207 20:02:35.189315       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 20:02:35.189342       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 20:02:35.197424       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 20:02:35.197441       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 20:02:35.198953       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1207 20:02:35.198976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1207 20:02:35.215629       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 20:02:35.215647       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1207 20:02:35.232416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 20:02:35.232506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1207 20:02:35.488677       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 20:02:18 UTC, ends at Thu 2023-12-07 20:05:14 UTC. --
	Dec 07 20:05:10 addons-210000 kubelet[2392]: E1207 20:05:10.988320    2392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c6079d2-1409-4b04-8ed6-16f22dfd1059" containerName="busybox"
	Dec 07 20:05:10 addons-210000 kubelet[2392]: E1207 20:05:10.988323    2392 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="695f8fad-d1b5-4ce3-9bc1-e03fcf968403" containerName="minikube-ingress-dns"
	Dec 07 20:05:10 addons-210000 kubelet[2392]: I1207 20:05:10.988338    2392 memory_manager.go:346] "RemoveStaleState removing state" podUID="695f8fad-d1b5-4ce3-9bc1-e03fcf968403" containerName="minikube-ingress-dns"
	Dec 07 20:05:10 addons-210000 kubelet[2392]: I1207 20:05:10.988341    2392 memory_manager.go:346] "RemoveStaleState removing state" podUID="0c6079d2-1409-4b04-8ed6-16f22dfd1059" containerName="busybox"
	Dec 07 20:05:10 addons-210000 kubelet[2392]: I1207 20:05:10.988344    2392 memory_manager.go:346] "RemoveStaleState removing state" podUID="695f8fad-d1b5-4ce3-9bc1-e03fcf968403" containerName="minikube-ingress-dns"
	Dec 07 20:05:10 addons-210000 kubelet[2392]: I1207 20:05:10.988348    2392 memory_manager.go:346] "RemoveStaleState removing state" podUID="6f282c2c-4f7b-455b-9c56-f2a7a6b20a2d" containerName="controller"
	Dec 07 20:05:10 addons-210000 kubelet[2392]: I1207 20:05:10.988350    2392 memory_manager.go:346] "RemoveStaleState removing state" podUID="695f8fad-d1b5-4ce3-9bc1-e03fcf968403" containerName="minikube-ingress-dns"
	Dec 07 20:05:10 addons-210000 kubelet[2392]: I1207 20:05:10.988353    2392 memory_manager.go:346] "RemoveStaleState removing state" podUID="695f8fad-d1b5-4ce3-9bc1-e03fcf968403" containerName="minikube-ingress-dns"
	Dec 07 20:05:11 addons-210000 kubelet[2392]: I1207 20:05:11.096253    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/645c69a0-1fa1-4932-ba0a-535497946409-data\") pod \"helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd\" (UID: \"645c69a0-1fa1-4932-ba0a-535497946409\") " pod="local-path-storage/helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd"
	Dec 07 20:05:11 addons-210000 kubelet[2392]: I1207 20:05:11.096289    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxgnl\" (UniqueName: \"kubernetes.io/projected/645c69a0-1fa1-4932-ba0a-535497946409-kube-api-access-nxgnl\") pod \"helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd\" (UID: \"645c69a0-1fa1-4932-ba0a-535497946409\") " pod="local-path-storage/helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd"
	Dec 07 20:05:11 addons-210000 kubelet[2392]: I1207 20:05:11.096313    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/645c69a0-1fa1-4932-ba0a-535497946409-script\") pod \"helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd\" (UID: \"645c69a0-1fa1-4932-ba0a-535497946409\") " pod="local-path-storage/helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd"
	Dec 07 20:05:11 addons-210000 kubelet[2392]: I1207 20:05:11.096334    2392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/645c69a0-1fa1-4932-ba0a-535497946409-gcp-creds\") pod \"helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd\" (UID: \"645c69a0-1fa1-4932-ba0a-535497946409\") " pod="local-path-storage/helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd"
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.107692    2392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/645c69a0-1fa1-4932-ba0a-535497946409-gcp-creds\") pod \"645c69a0-1fa1-4932-ba0a-535497946409\" (UID: \"645c69a0-1fa1-4932-ba0a-535497946409\") "
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.107717    2392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/645c69a0-1fa1-4932-ba0a-535497946409-script\") pod \"645c69a0-1fa1-4932-ba0a-535497946409\" (UID: \"645c69a0-1fa1-4932-ba0a-535497946409\") "
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.107842    2392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxgnl\" (UniqueName: \"kubernetes.io/projected/645c69a0-1fa1-4932-ba0a-535497946409-kube-api-access-nxgnl\") pod \"645c69a0-1fa1-4932-ba0a-535497946409\" (UID: \"645c69a0-1fa1-4932-ba0a-535497946409\") "
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.107851    2392 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/645c69a0-1fa1-4932-ba0a-535497946409-data\") pod \"645c69a0-1fa1-4932-ba0a-535497946409\" (UID: \"645c69a0-1fa1-4932-ba0a-535497946409\") "
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.107868    2392 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/645c69a0-1fa1-4932-ba0a-535497946409-data" (OuterVolumeSpecName: "data") pod "645c69a0-1fa1-4932-ba0a-535497946409" (UID: "645c69a0-1fa1-4932-ba0a-535497946409"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.107826    2392 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/645c69a0-1fa1-4932-ba0a-535497946409-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "645c69a0-1fa1-4932-ba0a-535497946409" (UID: "645c69a0-1fa1-4932-ba0a-535497946409"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.107981    2392 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/645c69a0-1fa1-4932-ba0a-535497946409-script" (OuterVolumeSpecName: "script") pod "645c69a0-1fa1-4932-ba0a-535497946409" (UID: "645c69a0-1fa1-4932-ba0a-535497946409"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.108947    2392 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645c69a0-1fa1-4932-ba0a-535497946409-kube-api-access-nxgnl" (OuterVolumeSpecName: "kube-api-access-nxgnl") pod "645c69a0-1fa1-4932-ba0a-535497946409" (UID: "645c69a0-1fa1-4932-ba0a-535497946409"). InnerVolumeSpecName "kube-api-access-nxgnl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.208074    2392 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nxgnl\" (UniqueName: \"kubernetes.io/projected/645c69a0-1fa1-4932-ba0a-535497946409-kube-api-access-nxgnl\") on node \"addons-210000\" DevicePath \"\""
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.208108    2392 reconciler_common.go:300] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/645c69a0-1fa1-4932-ba0a-535497946409-data\") on node \"addons-210000\" DevicePath \"\""
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.208114    2392 reconciler_common.go:300] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/645c69a0-1fa1-4932-ba0a-535497946409-gcp-creds\") on node \"addons-210000\" DevicePath \"\""
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.208119    2392 reconciler_common.go:300] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/645c69a0-1fa1-4932-ba0a-535497946409-script\") on node \"addons-210000\" DevicePath \"\""
	Dec 07 20:05:13 addons-210000 kubelet[2392]: I1207 20:05:13.901591    2392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ad63463da534969acd7952ce48c18e4e6fb84731573cda0928290323f64611e"
	
	* 
	* ==> storage-provisioner [9af1db4f71cd] <==
	* I1207 20:02:51.354189       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 20:02:51.359753       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 20:02:51.359781       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 20:02:51.370225       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 20:02:51.374604       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-210000_5cde3482-f602-4b2a-81bb-44b0de0b454f!
	I1207 20:02:51.377203       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f41cb332-ba50-4eb9-ac50-11d82f88dcb0", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-210000_5cde3482-f602-4b2a-81bb-44b0de0b454f became leader
	I1207 20:02:51.475679       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-210000_5cde3482-f602-4b2a-81bb-44b0de0b454f!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-210000 -n addons-210000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-210000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-210000 describe pod helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-210000 describe pod helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd: exit status 1 (39.306ms)

** stderr ** 
	Error from server (NotFound): pods "helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-210000 describe pod helper-pod-delete-pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd: exit status 1
--- FAIL: TestAddons/parallel/Ingress (33.57s)

TestCertOptions (10.1s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions


=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-949000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
E1207 12:20:15.098210    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-949000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.812459792s)

-- stdout --
	* [cert-options-949000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-949000 in cluster cert-options-949000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-949000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-949000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-949000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-949000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-949000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (79.300458ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-949000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-949000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-949000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-949000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-949000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (43.201ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-949000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-949000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-949000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-12-07 12:20:22.962616 -0800 PST m=+1207.316666835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-949000 -n cert-options-949000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-949000 -n cert-options-949000: exit status 7 (31.55ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-949000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-949000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-949000
--- FAIL: TestCertOptions (10.10s)
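Editorial note: every start attempt in this failure (and the ones that follow) dies on the same error, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the qemu2 driver cannot reach the socket_vmnet daemon on the host. A quick local check, sketched here under the assumption of the default path shown in the logs (this is not part of the test run itself):

```shell
# Diagnostic sketch (editorial assumption, not part of the recorded run):
# minikube's qemu2 driver expects a socket_vmnet daemon listening on the
# Unix socket below. If the socket is absent, every VM start will fail
# with the "Connection refused" error seen throughout this report.
SOCKET=/var/run/socket_vmnet
if [ -S "$SOCKET" ]; then
  echo "socket_vmnet socket present: $SOCKET"
else
  echo "socket_vmnet socket missing or not a socket: $SOCKET"
fi
```

If the socket is missing on a Homebrew-based setup, minikube's qemu2 driver documentation suggests starting the daemon as a root service (e.g. `sudo brew services start socket_vmnet`); exact paths and service names vary by installation.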

TestCertExpiration (195.46s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-552000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-552000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.043961875s)

-- stdout --
	* [cert-expiration-552000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-552000 in cluster cert-expiration-552000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-552000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-552000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-552000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-552000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-552000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.238961083s)

-- stdout --
	* [cert-expiration-552000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-552000 in cluster cert-expiration-552000
	* Restarting existing qemu2 VM for "cert-expiration-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-552000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-552000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-552000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-552000 in cluster cert-expiration-552000
	* Restarting existing qemu2 VM for "cert-expiration-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-552000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-552000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-12-07 12:23:23.121371 -0800 PST m=+1387.478768668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-552000 -n cert-expiration-552000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-552000 -n cert-expiration-552000: exit status 7 (70.979792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-552000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-552000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-552000
--- FAIL: TestCertExpiration (195.46s)

TestDockerFlags (10.24s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-439000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-439000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.980851791s)

-- stdout --
	* [docker-flags-439000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-439000 in cluster docker-flags-439000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-439000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:20:02.784649    3848 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:20:02.784801    3848 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:20:02.784805    3848 out.go:309] Setting ErrFile to fd 2...
	I1207 12:20:02.784807    3848 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:20:02.784960    3848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:20:02.785993    3848 out.go:303] Setting JSON to false
	I1207 12:20:02.801686    3848 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2973,"bootTime":1701977429,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:20:02.801788    3848 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:20:02.807537    3848 out.go:177] * [docker-flags-439000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:20:02.812431    3848 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:20:02.812503    3848 notify.go:220] Checking for updates...
	I1207 12:20:02.817956    3848 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:20:02.821452    3848 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:20:02.824483    3848 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:20:02.827473    3848 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:20:02.830471    3848 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:20:02.833829    3848 config.go:182] Loaded profile config "force-systemd-flag-413000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:20:02.833897    3848 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:20:02.833945    3848 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:20:02.838495    3848 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:20:02.845429    3848 start.go:298] selected driver: qemu2
	I1207 12:20:02.845436    3848 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:20:02.845443    3848 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:20:02.847743    3848 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:20:02.851398    3848 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:20:02.854448    3848 start_flags.go:926] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1207 12:20:02.854488    3848 cni.go:84] Creating CNI manager for ""
	I1207 12:20:02.854496    3848 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:20:02.854500    3848 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:20:02.854506    3848 start_flags.go:323] config:
	{Name:docker-flags-439000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-439000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain
:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/ru
n/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:20:02.858929    3848 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:20:02.866314    3848 out.go:177] * Starting control plane node docker-flags-439000 in cluster docker-flags-439000
	I1207 12:20:02.870440    3848 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:20:02.870457    3848 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:20:02.870471    3848 cache.go:56] Caching tarball of preloaded images
	I1207 12:20:02.870535    3848 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:20:02.870541    3848 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:20:02.870607    3848 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/docker-flags-439000/config.json ...
	I1207 12:20:02.870618    3848 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/docker-flags-439000/config.json: {Name:mk4673e2cb1acde828855e91502d448bd234cda7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:20:02.870831    3848 start.go:365] acquiring machines lock for docker-flags-439000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:20:02.870871    3848 start.go:369] acquired machines lock for "docker-flags-439000" in 33.083µs
	I1207 12:20:02.870883    3848 start.go:93] Provisioning new machine with config: &{Name:docker-flags-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-439000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:20:02.870918    3848 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:20:02.878458    3848 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1207 12:20:02.895223    3848 start.go:159] libmachine.API.Create for "docker-flags-439000" (driver="qemu2")
	I1207 12:20:02.895258    3848 client.go:168] LocalClient.Create starting
	I1207 12:20:02.895319    3848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:20:02.895355    3848 main.go:141] libmachine: Decoding PEM data...
	I1207 12:20:02.895367    3848 main.go:141] libmachine: Parsing certificate...
	I1207 12:20:02.895414    3848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:20:02.895436    3848 main.go:141] libmachine: Decoding PEM data...
	I1207 12:20:02.895444    3848 main.go:141] libmachine: Parsing certificate...
	I1207 12:20:02.895820    3848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:20:03.021206    3848 main.go:141] libmachine: Creating SSH key...
	I1207 12:20:03.176799    3848 main.go:141] libmachine: Creating Disk image...
	I1207 12:20:03.176805    3848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:20:03.176994    3848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2
	I1207 12:20:03.189590    3848 main.go:141] libmachine: STDOUT: 
	I1207 12:20:03.189610    3848 main.go:141] libmachine: STDERR: 
	I1207 12:20:03.189668    3848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2 +20000M
	I1207 12:20:03.200268    3848 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:20:03.200282    3848 main.go:141] libmachine: STDERR: 
	I1207 12:20:03.200297    3848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2
	I1207 12:20:03.200302    3848 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:20:03.200342    3848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:3a:2e:7a:73:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2
	I1207 12:20:03.202018    3848 main.go:141] libmachine: STDOUT: 
	I1207 12:20:03.202033    3848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:20:03.202053    3848 client.go:171] LocalClient.Create took 306.793209ms
	I1207 12:20:05.204205    3848 start.go:128] duration metric: createHost completed in 2.333303375s
	I1207 12:20:05.204293    3848 start.go:83] releasing machines lock for "docker-flags-439000", held for 2.333457209s
	W1207 12:20:05.204383    3848 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:20:05.222487    3848 out.go:177] * Deleting "docker-flags-439000" in qemu2 ...
	W1207 12:20:05.238423    3848 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:20:05.238528    3848 start.go:709] Will try again in 5 seconds ...
	I1207 12:20:10.240613    3848 start.go:365] acquiring machines lock for docker-flags-439000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:20:10.240930    3848 start.go:369] acquired machines lock for "docker-flags-439000" in 221.208µs
	I1207 12:20:10.241015    3848 start.go:93] Provisioning new machine with config: &{Name:docker-flags-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root
SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-439000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:20:10.241181    3848 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:20:10.249210    3848 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1207 12:20:10.289480    3848 start.go:159] libmachine.API.Create for "docker-flags-439000" (driver="qemu2")
	I1207 12:20:10.289545    3848 client.go:168] LocalClient.Create starting
	I1207 12:20:10.289735    3848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:20:10.289805    3848 main.go:141] libmachine: Decoding PEM data...
	I1207 12:20:10.289826    3848 main.go:141] libmachine: Parsing certificate...
	I1207 12:20:10.289895    3848 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:20:10.289941    3848 main.go:141] libmachine: Decoding PEM data...
	I1207 12:20:10.289956    3848 main.go:141] libmachine: Parsing certificate...
	I1207 12:20:10.291085    3848 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:20:10.431596    3848 main.go:141] libmachine: Creating SSH key...
	I1207 12:20:10.662406    3848 main.go:141] libmachine: Creating Disk image...
	I1207 12:20:10.662420    3848 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:20:10.662635    3848 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2
	I1207 12:20:10.675643    3848 main.go:141] libmachine: STDOUT: 
	I1207 12:20:10.675665    3848 main.go:141] libmachine: STDERR: 
	I1207 12:20:10.675734    3848 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2 +20000M
	I1207 12:20:10.686273    3848 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:20:10.686289    3848 main.go:141] libmachine: STDERR: 
	I1207 12:20:10.686305    3848 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2
	I1207 12:20:10.686311    3848 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:20:10.686360    3848 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:bd:c1:3a:7c:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/docker-flags-439000/disk.qcow2
	I1207 12:20:10.688044    3848 main.go:141] libmachine: STDOUT: 
	I1207 12:20:10.688061    3848 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:20:10.688075    3848 client.go:171] LocalClient.Create took 398.5305ms
	I1207 12:20:12.690215    3848 start.go:128] duration metric: createHost completed in 2.449044417s
	I1207 12:20:12.690290    3848 start.go:83] releasing machines lock for "docker-flags-439000", held for 2.44938775s
	W1207 12:20:12.690761    3848 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-439000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-439000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:20:12.702470    3848 out.go:177] 
	W1207 12:20:12.706620    3848 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:20:12.706753    3848 out.go:239] * 
	* 
	W1207 12:20:12.709561    3848 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:20:12.720480    3848 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-439000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-439000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-439000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (79.612417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-439000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-439000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-439000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-439000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-439000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-439000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (45.706667ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-439000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-439000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-439000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-439000\"\n"
panic.go:523: *** TestDockerFlags FAILED at 2023-12-07 12:20:12.864811 -0800 PST m=+1197.218674460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-439000 -n docker-flags-439000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-439000 -n docker-flags-439000: exit status 7 (30.97925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-439000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-439000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-439000
--- FAIL: TestDockerFlags (10.24s)
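Every VM-creation failure in this run reduces to the same root cause: QEMU is launched through `/opt/socket_vmnet/bin/socket_vmnet_client`, which cannot connect to the `socket_vmnet` daemon's unix socket. A minimal triage sketch for the CI host follows; the socket path matches the `SocketVMnetPath` in the config dump above, but the `launchctl` label is an assumption taken from socket_vmnet's usual launchd-based install and may differ on this machine.

```shell
#!/bin/sh
# Triage the recurring error:
#   Failed to connect to "/var/run/socket_vmnet": Connection refused
# QEMU only gets vmnet networking if the socket_vmnet daemon is
# accepting connections on this unix socket.
SOCKET=/var/run/socket_vmnet

if [ -S "$SOCKET" ]; then
    # The socket file exists; the daemon may still be dead, in which
    # case connects are refused even though the path is present.
    echo "socket_vmnet socket present at $SOCKET"
else
    # No socket file at all: the daemon is not running, or was
    # installed with a different SocketVMnetPath.
    echo "socket_vmnet socket missing at $SOCKET"
    # Hypothetical fix, assuming the lima-vm launchd plist was used:
    echo "try: sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet"
fi
```

If the socket file is present but connections are still refused, restarting the daemon (or the host) before re-running the suite is the usual remedy; all 40+ qemu2 failures above would then be retried with a live socket.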

TestForceSystemdFlag (11.03s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-413000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-413000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.814325542s)

-- stdout --
	* [force-systemd-flag-413000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-413000 in cluster force-systemd-flag-413000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-413000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:19:56.833051    3821 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:19:56.833193    3821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:19:56.833196    3821 out.go:309] Setting ErrFile to fd 2...
	I1207 12:19:56.833199    3821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:19:56.833343    3821 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:19:56.834353    3821 out.go:303] Setting JSON to false
	I1207 12:19:56.850068    3821 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2967,"bootTime":1701977429,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:19:56.850143    3821 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:19:56.856332    3821 out.go:177] * [force-systemd-flag-413000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:19:56.862256    3821 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:19:56.866304    3821 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:19:56.862296    3821 notify.go:220] Checking for updates...
	I1207 12:19:56.869332    3821 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:19:56.872344    3821 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:19:56.875248    3821 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:19:56.878333    3821 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:19:56.881707    3821 config.go:182] Loaded profile config "force-systemd-env-557000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:19:56.881776    3821 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:19:56.881821    3821 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:19:56.886223    3821 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:19:56.893235    3821 start.go:298] selected driver: qemu2
	I1207 12:19:56.893241    3821 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:19:56.893246    3821 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:19:56.895632    3821 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:19:56.903291    3821 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:19:56.906380    3821 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 12:19:56.906412    3821 cni.go:84] Creating CNI manager for ""
	I1207 12:19:56.906418    3821 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:19:56.906423    3821 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:19:56.906427    3821 start_flags.go:323] config:
	{Name:force-systemd-flag-413000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-413000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:19:56.910947    3821 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:56.918244    3821 out.go:177] * Starting control plane node force-systemd-flag-413000 in cluster force-systemd-flag-413000
	I1207 12:19:56.922222    3821 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:19:56.922236    3821 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:19:56.922244    3821 cache.go:56] Caching tarball of preloaded images
	I1207 12:19:56.922296    3821 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:19:56.922302    3821 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:19:56.922355    3821 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/force-systemd-flag-413000/config.json ...
	I1207 12:19:56.922366    3821 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/force-systemd-flag-413000/config.json: {Name:mkda7a1a72d89e014f0c051b16f928f08be9aceb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:19:56.922576    3821 start.go:365] acquiring machines lock for force-systemd-flag-413000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:19:56.922614    3821 start.go:369] acquired machines lock for "force-systemd-flag-413000" in 26.25µs
	I1207 12:19:56.922627    3821 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-413000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:19:56.922655    3821 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:19:56.930301    3821 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1207 12:19:56.947860    3821 start.go:159] libmachine.API.Create for "force-systemd-flag-413000" (driver="qemu2")
	I1207 12:19:56.947890    3821 client.go:168] LocalClient.Create starting
	I1207 12:19:56.947959    3821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:19:56.947992    3821 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:56.948002    3821 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:56.948037    3821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:19:56.948059    3821 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:56.948066    3821 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:56.948428    3821 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:19:57.074407    3821 main.go:141] libmachine: Creating SSH key...
	I1207 12:19:57.185690    3821 main.go:141] libmachine: Creating Disk image...
	I1207 12:19:57.185696    3821 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:19:57.185863    3821 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I1207 12:19:57.198075    3821 main.go:141] libmachine: STDOUT: 
	I1207 12:19:57.198094    3821 main.go:141] libmachine: STDERR: 
	I1207 12:19:57.198147    3821 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2 +20000M
	I1207 12:19:57.208699    3821 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:19:57.208720    3821 main.go:141] libmachine: STDERR: 
	I1207 12:19:57.208738    3821 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I1207 12:19:57.208758    3821 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:19:57.208789    3821 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:88:29:ca:ea:5f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I1207 12:19:57.210442    3821 main.go:141] libmachine: STDOUT: 
	I1207 12:19:57.210456    3821 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:19:57.210472    3821 client.go:171] LocalClient.Create took 262.579208ms
	I1207 12:19:59.212721    3821 start.go:128] duration metric: createHost completed in 2.290070084s
	I1207 12:19:59.212791    3821 start.go:83] releasing machines lock for "force-systemd-flag-413000", held for 2.290210042s
	W1207 12:19:59.212847    3821 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:19:59.223077    3821 out.go:177] * Deleting "force-systemd-flag-413000" in qemu2 ...
	W1207 12:19:59.250363    3821 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:19:59.250391    3821 start.go:709] Will try again in 5 seconds ...
	I1207 12:20:04.252490    3821 start.go:365] acquiring machines lock for force-systemd-flag-413000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:20:05.204421    3821 start.go:369] acquired machines lock for "force-systemd-flag-413000" in 951.846417ms
	I1207 12:20:05.204580    3821 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-413000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-413000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mount
UID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:20:05.204755    3821 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:20:05.213282    3821 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1207 12:20:05.251646    3821 start.go:159] libmachine.API.Create for "force-systemd-flag-413000" (driver="qemu2")
	I1207 12:20:05.251689    3821 client.go:168] LocalClient.Create starting
	I1207 12:20:05.251833    3821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:20:05.251898    3821 main.go:141] libmachine: Decoding PEM data...
	I1207 12:20:05.251919    3821 main.go:141] libmachine: Parsing certificate...
	I1207 12:20:05.251979    3821 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:20:05.252021    3821 main.go:141] libmachine: Decoding PEM data...
	I1207 12:20:05.252066    3821 main.go:141] libmachine: Parsing certificate...
	I1207 12:20:05.252646    3821 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:20:05.399993    3821 main.go:141] libmachine: Creating SSH key...
	I1207 12:20:05.540251    3821 main.go:141] libmachine: Creating Disk image...
	I1207 12:20:05.540257    3821 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:20:05.540476    3821 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I1207 12:20:05.552888    3821 main.go:141] libmachine: STDOUT: 
	I1207 12:20:05.552907    3821 main.go:141] libmachine: STDERR: 
	I1207 12:20:05.552986    3821 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2 +20000M
	I1207 12:20:05.563556    3821 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:20:05.563570    3821 main.go:141] libmachine: STDERR: 
	I1207 12:20:05.563584    3821 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I1207 12:20:05.563592    3821 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:20:05.563636    3821 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:6c:c6:c1:1a:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-flag-413000/disk.qcow2
	I1207 12:20:05.565321    3821 main.go:141] libmachine: STDOUT: 
	I1207 12:20:05.565334    3821 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:20:05.565364    3821 client.go:171] LocalClient.Create took 313.661208ms
	I1207 12:20:07.567504    3821 start.go:128] duration metric: createHost completed in 2.362762375s
	I1207 12:20:07.567550    3821 start.go:83] releasing machines lock for "force-systemd-flag-413000", held for 2.363142875s
	W1207 12:20:07.567894    3821 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-413000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:20:07.582677    3821 out.go:177] 
	W1207 12:20:07.590568    3821 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:20:07.590594    3821 out.go:239] * 
	* 
	W1207 12:20:07.593473    3821 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:20:07.603491    3821 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-413000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-413000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-413000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (79.663875ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-413000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-413000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2023-12-07 12:20:07.700807 -0800 PST m=+1192.054575251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-413000 -n force-systemd-flag-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-413000 -n force-systemd-flag-413000: exit status 7 (37.043458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-413000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-413000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-413000
--- FAIL: TestForceSystemdFlag (11.03s)
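Every failure above wraps the same root cause several levels deep ("creating host: create: creating: …"). As a minimal sketch of how one might strip minikube's error wrapping down to the innermost message when triaging logs like this (the error text is copied verbatim from the log; the `root_cause` helper is hypothetical, not part of minikube):

```python
def root_cause(err: str) -> str:
    # minikube wraps errors as "outer: inner: ...: innermost";
    # take everything after the last "creating: " wrapper.
    marker = "creating: "
    idx = err.rfind(marker)
    return err[idx + len(marker):] if idx != -1 else err

msg = ('error provisioning guest: Failed to start host: creating host: '
       'create: creating: Failed to connect to "/var/run/socket_vmnet": '
       'Connection refused: exit status 1')
print(root_cause(msg))
# → Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
```

The innermost message here points at the socket_vmnet daemon on the CI host rather than at minikube itself, which is consistent with every qemu2 test in this run failing the same way.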

TestForceSystemdEnv (10.26s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-557000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-557000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.04601175s)

-- stdout --
	* [force-systemd-env-557000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-557000 in cluster force-systemd-env-557000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-557000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:19:52.524887    3789 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:19:52.525039    3789 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:19:52.525042    3789 out.go:309] Setting ErrFile to fd 2...
	I1207 12:19:52.525045    3789 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:19:52.525171    3789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:19:52.526245    3789 out.go:303] Setting JSON to false
	I1207 12:19:52.542210    3789 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2963,"bootTime":1701977429,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:19:52.542300    3789 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:19:52.547644    3789 out.go:177] * [force-systemd-env-557000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:19:52.558640    3789 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:19:52.554694    3789 notify.go:220] Checking for updates...
	I1207 12:19:52.566677    3789 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:19:52.574641    3789 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:19:52.582670    3789 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:19:52.590615    3789 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:19:52.598577    3789 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1207 12:19:52.603117    3789 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:19:52.603167    3789 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:19:52.607616    3789 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:19:52.614639    3789 start.go:298] selected driver: qemu2
	I1207 12:19:52.614646    3789 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:19:52.614652    3789 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:19:52.616946    3789 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:19:52.620644    3789 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:19:52.624716    3789 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 12:19:52.624756    3789 cni.go:84] Creating CNI manager for ""
	I1207 12:19:52.624763    3789 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:19:52.624769    3789 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:19:52.624775    3789 start_flags.go:323] config:
	{Name:force-systemd-env-557000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-557000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:19:52.629156    3789 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:52.636545    3789 out.go:177] * Starting control plane node force-systemd-env-557000 in cluster force-systemd-env-557000
	I1207 12:19:52.641689    3789 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:19:52.641705    3789 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:19:52.641715    3789 cache.go:56] Caching tarball of preloaded images
	I1207 12:19:52.641773    3789 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:19:52.641778    3789 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:19:52.641851    3789 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/force-systemd-env-557000/config.json ...
	I1207 12:19:52.641862    3789 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/force-systemd-env-557000/config.json: {Name:mk2533c64656b0311009dab75b2204a8bca0383d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:19:52.642139    3789 start.go:365] acquiring machines lock for force-systemd-env-557000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:19:52.642178    3789 start.go:369] acquired machines lock for "force-systemd-env-557000" in 30.25µs
	I1207 12:19:52.642190    3789 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-557000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:19:52.642222    3789 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:19:52.650686    3789 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1207 12:19:52.666536    3789 start.go:159] libmachine.API.Create for "force-systemd-env-557000" (driver="qemu2")
	I1207 12:19:52.666564    3789 client.go:168] LocalClient.Create starting
	I1207 12:19:52.666623    3789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:19:52.666653    3789 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:52.666662    3789 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:52.666698    3789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:19:52.666719    3789 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:52.666727    3789 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:52.667049    3789 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:19:52.840385    3789 main.go:141] libmachine: Creating SSH key...
	I1207 12:19:53.092743    3789 main.go:141] libmachine: Creating Disk image...
	I1207 12:19:53.092759    3789 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:19:53.092969    3789 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2
	I1207 12:19:53.105872    3789 main.go:141] libmachine: STDOUT: 
	I1207 12:19:53.105910    3789 main.go:141] libmachine: STDERR: 
	I1207 12:19:53.105992    3789 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2 +20000M
	I1207 12:19:53.117372    3789 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:19:53.117392    3789 main.go:141] libmachine: STDERR: 
	I1207 12:19:53.117415    3789 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2
	I1207 12:19:53.117421    3789 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:19:53.117466    3789 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:d5:5f:d4:7e:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2
	I1207 12:19:53.119308    3789 main.go:141] libmachine: STDOUT: 
	I1207 12:19:53.119325    3789 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:19:53.119346    3789 client.go:171] LocalClient.Create took 452.784083ms
	I1207 12:19:55.121513    3789 start.go:128] duration metric: createHost completed in 2.479311208s
	I1207 12:19:55.121606    3789 start.go:83] releasing machines lock for "force-systemd-env-557000", held for 2.479462667s
	W1207 12:19:55.121705    3789 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:19:55.132085    3789 out.go:177] * Deleting "force-systemd-env-557000" in qemu2 ...
	W1207 12:19:55.156004    3789 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:19:55.156028    3789 start.go:709] Will try again in 5 seconds ...
	I1207 12:20:00.158182    3789 start.go:365] acquiring machines lock for force-systemd-env-557000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:20:00.158600    3789 start.go:369] acquired machines lock for "force-systemd-env-557000" in 300.666µs
	I1207 12:20:00.158760    3789 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-557000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-557000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUI
D:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:20:00.159036    3789 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:20:00.181381    3789 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1207 12:20:00.229433    3789 start.go:159] libmachine.API.Create for "force-systemd-env-557000" (driver="qemu2")
	I1207 12:20:00.229487    3789 client.go:168] LocalClient.Create starting
	I1207 12:20:00.229627    3789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:20:00.229688    3789 main.go:141] libmachine: Decoding PEM data...
	I1207 12:20:00.229714    3789 main.go:141] libmachine: Parsing certificate...
	I1207 12:20:00.229792    3789 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:20:00.229837    3789 main.go:141] libmachine: Decoding PEM data...
	I1207 12:20:00.229858    3789 main.go:141] libmachine: Parsing certificate...
	I1207 12:20:00.230439    3789 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:20:00.367794    3789 main.go:141] libmachine: Creating SSH key...
	I1207 12:20:00.468772    3789 main.go:141] libmachine: Creating Disk image...
	I1207 12:20:00.468782    3789 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:20:00.468957    3789 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2
	I1207 12:20:00.481287    3789 main.go:141] libmachine: STDOUT: 
	I1207 12:20:00.481348    3789 main.go:141] libmachine: STDERR: 
	I1207 12:20:00.481420    3789 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2 +20000M
	I1207 12:20:00.491890    3789 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:20:00.491942    3789 main.go:141] libmachine: STDERR: 
	I1207 12:20:00.491959    3789 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2
	I1207 12:20:00.491965    3789 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:20:00.492003    3789 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:2a:d9:ae:c3:ce -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/force-systemd-env-557000/disk.qcow2
	I1207 12:20:00.493654    3789 main.go:141] libmachine: STDOUT: 
	I1207 12:20:00.493747    3789 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:20:00.493757    3789 client.go:171] LocalClient.Create took 264.260083ms
	I1207 12:20:02.496050    3789 start.go:128] duration metric: createHost completed in 2.336974417s
	I1207 12:20:02.496146    3789 start.go:83] releasing machines lock for "force-systemd-env-557000", held for 2.337566667s
	W1207 12:20:02.496572    3789 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-557000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:20:02.509472    3789 out.go:177] 
	W1207 12:20:02.513463    3789 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:20:02.513491    3789 out.go:239] * 
	* 
	W1207 12:20:02.516003    3789 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:20:02.524467    3789 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-557000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-557000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-557000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (79.712333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-557000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-557000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2023-12-07 12:20:02.622682 -0800 PST m=+1186.976356293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-557000 -n force-systemd-env-557000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-557000 -n force-systemd-env-557000: exit status 7 (35.281375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-557000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-557000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-557000
--- FAIL: TestForceSystemdEnv (10.26s)

TestErrorSpam/setup (19.66s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-890000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-890000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 --driver=qemu2 : exit status 90 (19.658704792s)

-- stdout --
	* [nospam-890000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node nospam-890000 in cluster nospam-890000
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	sudo journalctl --no-pager -u cri-docker.socket:
	-- stdout --
	-- Journal begins at Thu 2023-12-07 20:06:34 UTC, ends at Thu 2023-12-07 20:06:40 UTC. --
	Dec 07 20:06:34 minikube systemd[1]: Starting CRI Docker Socket for the API.
	Dec 07 20:06:34 minikube systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 07 20:06:38 nospam-890000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 07 20:06:38 nospam-890000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 07 20:06:38 nospam-890000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 07 20:06:38 nospam-890000 systemd[1]: Starting CRI Docker Socket for the API.
	Dec 07 20:06:38 nospam-890000 systemd[1]: Listening on CRI Docker Socket for the API.
	Dec 07 20:06:40 nospam-890000 systemd[1]: cri-docker.socket: Succeeded.
	Dec 07 20:06:40 nospam-890000 systemd[1]: Closed CRI Docker Socket for the API.
	Dec 07 20:06:40 nospam-890000 systemd[1]: Stopping CRI Docker Socket for the API.
	Dec 07 20:06:40 nospam-890000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
	Dec 07 20:06:40 nospam-890000 systemd[1]: Failed to listen on CRI Docker Socket for the API.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-890000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 --driver=qemu2 " failed: exit status 90
error_spam_test.go:96: unexpected stderr: "X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Job failed. See \"journalctl -xe\" for details."
error_spam_test.go:96: unexpected stderr: "sudo journalctl --no-pager -u cri-docker.socket:"
error_spam_test.go:96: unexpected stderr: "-- stdout --"
error_spam_test.go:96: unexpected stderr: "-- Journal begins at Thu 2023-12-07 20:06:34 UTC, ends at Thu 2023-12-07 20:06:40 UTC. --"
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:34 minikube systemd[1]: Starting CRI Docker Socket for the API."
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:34 minikube systemd[1]: Listening on CRI Docker Socket for the API."
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:38 nospam-890000 systemd[1]: cri-docker.socket: Succeeded."
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:38 nospam-890000 systemd[1]: Closed CRI Docker Socket for the API."
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:38 nospam-890000 systemd[1]: Stopping CRI Docker Socket for the API."
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:38 nospam-890000 systemd[1]: Starting CRI Docker Socket for the API."
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:38 nospam-890000 systemd[1]: Listening on CRI Docker Socket for the API."
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:40 nospam-890000 systemd[1]: cri-docker.socket: Succeeded."
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:40 nospam-890000 systemd[1]: Closed CRI Docker Socket for the API."
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:40 nospam-890000 systemd[1]: Stopping CRI Docker Socket for the API."
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:40 nospam-890000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing."
error_spam_test.go:96: unexpected stderr: "Dec 07 20:06:40 nospam-890000 systemd[1]: Failed to listen on CRI Docker Socket for the API."
error_spam_test.go:96: unexpected stderr: "-- /stdout --"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-890000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
- MINIKUBE_LOCATION=17719
- KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting control plane node nospam-890000 in cluster nospam-890000
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...

error_spam_test.go:111: minikube stderr:
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:

stderr:
Job failed. See "journalctl -xe" for details.

sudo journalctl --no-pager -u cri-docker.socket:
-- stdout --
-- Journal begins at Thu 2023-12-07 20:06:34 UTC, ends at Thu 2023-12-07 20:06:40 UTC. --
Dec 07 20:06:34 minikube systemd[1]: Starting CRI Docker Socket for the API.
Dec 07 20:06:34 minikube systemd[1]: Listening on CRI Docker Socket for the API.
Dec 07 20:06:38 nospam-890000 systemd[1]: cri-docker.socket: Succeeded.
Dec 07 20:06:38 nospam-890000 systemd[1]: Closed CRI Docker Socket for the API.
Dec 07 20:06:38 nospam-890000 systemd[1]: Stopping CRI Docker Socket for the API.
Dec 07 20:06:38 nospam-890000 systemd[1]: Starting CRI Docker Socket for the API.
Dec 07 20:06:38 nospam-890000 systemd[1]: Listening on CRI Docker Socket for the API.
Dec 07 20:06:40 nospam-890000 systemd[1]: cri-docker.socket: Succeeded.
Dec 07 20:06:40 nospam-890000 systemd[1]: Closed CRI Docker Socket for the API.
Dec 07 20:06:40 nospam-890000 systemd[1]: Stopping CRI Docker Socket for the API.
Dec 07 20:06:40 nospam-890000 systemd[1]: cri-docker.socket: Socket service cri-docker.service already active, refusing.
Dec 07 20:06:40 nospam-890000 systemd[1]: Failed to listen on CRI Docker Socket for the API.

-- /stdout --
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (19.66s)

TestFunctional/parallel/ServiceCmdConnect (31.4s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1626: (dbg) Run:  kubectl --context functional-469000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-469000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-7799dfb7c6-5d4cn" [ae073c1b-04c5-4a77-bdf3-f979080d6e15] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-7799dfb7c6-5d4cn" [ae073c1b-04c5-4a77-bdf3-f979080d6e15] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.009833292s
functional_test.go:1648: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.105.4:32558
functional_test.go:1660: error fetching http://192.168.105.4:32558: Get "http://192.168.105.4:32558": dial tcp 192.168.105.4:32558: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32558: Get "http://192.168.105.4:32558": dial tcp 192.168.105.4:32558: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32558: Get "http://192.168.105.4:32558": dial tcp 192.168.105.4:32558: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32558: Get "http://192.168.105.4:32558": dial tcp 192.168.105.4:32558: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32558: Get "http://192.168.105.4:32558": dial tcp 192.168.105.4:32558: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32558: Get "http://192.168.105.4:32558": dial tcp 192.168.105.4:32558: connect: connection refused
functional_test.go:1660: error fetching http://192.168.105.4:32558: Get "http://192.168.105.4:32558": dial tcp 192.168.105.4:32558: connect: connection refused
functional_test.go:1680: failed to fetch http://192.168.105.4:32558: Get "http://192.168.105.4:32558": dial tcp 192.168.105.4:32558: connect: connection refused
functional_test.go:1597: service test failed - dumping debug information
functional_test.go:1598: -----------------------service failure post-mortem--------------------------------
functional_test.go:1601: (dbg) Run:  kubectl --context functional-469000 describe po hello-node-connect
functional_test.go:1605: hello-node pod describe:
Name:             hello-node-connect-7799dfb7c6-5d4cn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-469000/192.168.105.4
Start Time:       Thu, 07 Dec 2023 12:11:14 -0800
Labels:           app=hello-node-connect
pod-template-hash=7799dfb7c6
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7799dfb7c6
Containers:
echoserver-arm:
Container ID:   docker://20b6f0c6c5290a937ca4d71ed6f44adb1a40a6b40d32121a8f860bf522a884ed
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Thu, 07 Dec 2023 12:11:28 -0800
Finished:     Thu, 07 Dec 2023 12:11:28 -0800
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zmzpr (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-zmzpr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  30s                default-scheduler  Successfully assigned default/hello-node-connect-7799dfb7c6-5d4cn to functional-469000
Normal   Pulled     16s (x3 over 29s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    16s (x3 over 29s)  kubelet            Created container echoserver-arm
Normal   Started    16s (x3 over 29s)  kubelet            Started container echoserver-arm
Warning  BackOff    4s (x3 over 28s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-7799dfb7c6-5d4cn_default(ae073c1b-04c5-4a77-bdf3-f979080d6e15)

functional_test.go:1607: (dbg) Run:  kubectl --context functional-469000 logs -l app=hello-node-connect
functional_test.go:1611: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1613: (dbg) Run:  kubectl --context functional-469000 describe svc hello-node-connect
functional_test.go:1617: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.104.13.20
IPs:                      10.104.13.20
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32558/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-469000 -n functional-469000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                                        Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh       | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh -- ls                                                                                        | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|           | -la /mount-9p                                                                                                      |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh cat                                                                                          | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|           | /mount-9p/test-1701979893588371000                                                                                 |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh stat                                                                                         | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|           | /mount-9p/created-by-test                                                                                          |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh stat                                                                                         | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|           | /mount-9p/created-by-pod                                                                                           |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh sudo                                                                                         | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|           | umount -f /mount-9p                                                                                                |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port15898902/001:/mount-9p |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1 --port 46464                                                                                |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|           | -T /mount-9p | grep 9p                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh -- ls                                                                                        | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|           | -la /mount-9p                                                                                                      |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh sudo                                                                                         | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | umount -f /mount-9p                                                                                                |                   |         |         |                     |                     |
	| mount     | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1274315149/001:/mount2 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1274315149/001:/mount1 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount     | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1274315149/001:/mount3 |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|           | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|           | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh       | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|           | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| mount     | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | --kill=true                                                                                                        |                   |         |         |                     |                     |
	| start     | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start     | -p functional-469000 --dry-run                                                                                     | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start     | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|           | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                                                                 | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|           | -p functional-469000                                                                                               |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 12:11:42
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 12:11:42.928153    2795 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:11:42.928324    2795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:11:42.928328    2795 out.go:309] Setting ErrFile to fd 2...
	I1207 12:11:42.928330    2795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:11:42.928457    2795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:11:42.929847    2795 out.go:303] Setting JSON to false
	I1207 12:11:42.946745    2795 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2473,"bootTime":1701977429,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:11:42.946833    2795 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:11:42.952011    2795 out.go:177] * [functional-469000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:11:42.958982    2795 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:11:42.962990    2795 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:11:42.959087    2795 notify.go:220] Checking for updates...
	I1207 12:11:42.968958    2795 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:11:42.971972    2795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:11:42.978929    2795 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:11:42.986878    2795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:11:42.991175    2795 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:11:42.991435    2795 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:11:42.995795    2795 out.go:177] * Using the qemu2 driver based on existing profile
	I1207 12:11:43.002938    2795 start.go:298] selected driver: qemu2
	I1207 12:11:43.002944    2795 start.go:902] validating driver "qemu2" against &{Name:functional-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-469000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:11:43.002991    2795 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:11:43.010018    2795 out.go:177] 
	W1207 12:11:43.013947    2795 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 12:11:43.017954    2795 out.go:177] 
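	The dry-run start above exits with RSRC_INSUFFICIENT_REQ_MEMORY because the test deliberately requests 250MB, which is below minikube's 1800MB usable minimum. A minimal shell sketch of that validation (the 250/1800 thresholds are taken from the log line; the comparison itself is an assumed simplification of minikube's internal check):

```shell
# Sketch of the memory validation that fails in the dry-run above.
# Values come from the log; the check is an assumed simplification.
requested_mb=250
minimum_mb=1800
if [ "$requested_mb" -lt "$minimum_mb" ]; then
  echo "RSRC_INSUFFICIENT_REQ_MEMORY: requested ${requested_mb}MiB is below the ${minimum_mb}MB minimum"
fi
```

	In a real run, passing `--memory 1800` (or more) to `minikube start` avoids this error; the test intentionally keeps 250MB to exercise the failure path.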
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-12-07 20:08:53 UTC, ends at Thu 2023-12-07 20:11:45 UTC. --
	Dec 07 20:11:36 functional-469000 dockerd[6570]: time="2023-12-07T20:11:36.738565234Z" level=warning msg="cleaning up after shim disconnected" id=40ec6bb60e470a8b6c62afae00c2b629fff1436d58d1e9cf91fb427457108ce8 namespace=moby
	Dec 07 20:11:36 functional-469000 dockerd[6570]: time="2023-12-07T20:11:36.738569817Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 07 20:11:37 functional-469000 dockerd[6570]: time="2023-12-07T20:11:37.915949000Z" level=info msg="shim disconnected" id=59c0703b4ebf067a62daa27c94d0aecaab72664e289bb16dfd2aad1c409e1532 namespace=moby
	Dec 07 20:11:37 functional-469000 dockerd[6563]: time="2023-12-07T20:11:37.916018459Z" level=info msg="ignoring event" container=59c0703b4ebf067a62daa27c94d0aecaab72664e289bb16dfd2aad1c409e1532 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:11:37 functional-469000 dockerd[6570]: time="2023-12-07T20:11:37.916206752Z" level=warning msg="cleaning up after shim disconnected" id=59c0703b4ebf067a62daa27c94d0aecaab72664e289bb16dfd2aad1c409e1532 namespace=moby
	Dec 07 20:11:37 functional-469000 dockerd[6570]: time="2023-12-07T20:11:37.916217419Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 07 20:11:38 functional-469000 dockerd[6570]: time="2023-12-07T20:11:38.344701283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 07 20:11:38 functional-469000 dockerd[6570]: time="2023-12-07T20:11:38.344731200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:11:38 functional-469000 dockerd[6570]: time="2023-12-07T20:11:38.344748034Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 07 20:11:38 functional-469000 dockerd[6570]: time="2023-12-07T20:11:38.344752534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:11:38 functional-469000 dockerd[6570]: time="2023-12-07T20:11:38.380054405Z" level=info msg="shim disconnected" id=40436fec0ae063787c9faab2421cc9c7fc2032301e8c87314fe3675f8828c22b namespace=moby
	Dec 07 20:11:38 functional-469000 dockerd[6563]: time="2023-12-07T20:11:38.380205157Z" level=info msg="ignoring event" container=40436fec0ae063787c9faab2421cc9c7fc2032301e8c87314fe3675f8828c22b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:11:38 functional-469000 dockerd[6570]: time="2023-12-07T20:11:38.380455243Z" level=warning msg="cleaning up after shim disconnected" id=40436fec0ae063787c9faab2421cc9c7fc2032301e8c87314fe3675f8828c22b namespace=moby
	Dec 07 20:11:38 functional-469000 dockerd[6570]: time="2023-12-07T20:11:38.380465076Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 07 20:11:43 functional-469000 dockerd[6570]: time="2023-12-07T20:11:43.974891996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 07 20:11:43 functional-469000 dockerd[6570]: time="2023-12-07T20:11:43.974910246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:11:43 functional-469000 dockerd[6570]: time="2023-12-07T20:11:43.974918913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 07 20:11:43 functional-469000 dockerd[6570]: time="2023-12-07T20:11:43.974923122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:11:43 functional-469000 dockerd[6570]: time="2023-12-07T20:11:43.974859579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 07 20:11:43 functional-469000 dockerd[6570]: time="2023-12-07T20:11:43.974888829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:11:43 functional-469000 dockerd[6570]: time="2023-12-07T20:11:43.974900954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 07 20:11:43 functional-469000 dockerd[6570]: time="2023-12-07T20:11:43.974905788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:11:44 functional-469000 cri-dockerd[6935]: time="2023-12-07T20:11:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8355ed2aedc4cc75c3417e08a067b2c0e514f65d6a6df811787057883650c7c6/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 07 20:11:44 functional-469000 cri-dockerd[6935]: time="2023-12-07T20:11:44Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3cedd70e3d6aa1ebf8ac9745c67727a6d92edd3de8624d8d038b263d8432718b/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Dec 07 20:11:44 functional-469000 dockerd[6563]: time="2023-12-07T20:11:44.347478969Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	40436fec0ae06       72565bf5bbedf                                                                                         7 seconds ago        Exited              echoserver-arm            3                   61cbcf26f5c0e       hello-node-759d89bdcc-728zk
	40ec6bb60e470       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 seconds ago        Exited              mount-munger              0                   59c0703b4ebf0       busybox-mount
	20b6f0c6c5290       72565bf5bbedf                                                                                         17 seconds ago       Exited              echoserver-arm            2                   1940945e4d110       hello-node-connect-7799dfb7c6-5d4cn
	8fe966979cfa4       nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee                         18 seconds ago       Running             myfrontend                0                   2d8eddd149844       sp-pod
	a1e7dcb88822f       nginx@sha256:3923f8de8d2214b9490e68fd6ae63ea604deddd166df2755b788bef04848b9bc                         37 seconds ago       Running             nginx                     0                   c2cf87fbf1063       nginx-svc
	4332214ab1c97       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   7209197e92cb3       coredns-5dd5756b68-kvgnr
	4047b8b7ab28c       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       2                   8e4b247649acd       storage-provisioner
	b49b5f32fc201       3ca3ca488cf13                                                                                         About a minute ago   Running             kube-proxy                2                   f6e9247e9a753       kube-proxy-hszpr
	e8e648d2c7a0d       05c284c929889                                                                                         About a minute ago   Running             kube-scheduler            2                   0a73ba1344424       kube-scheduler-functional-469000
	88a53e5b63ff4       9961cbceaf234                                                                                         About a minute ago   Running             kube-controller-manager   2                   4710fdf795d27       kube-controller-manager-functional-469000
	765063314ce01       9cdd6470f48c8                                                                                         About a minute ago   Running             etcd                      2                   60d041682f05f       etcd-functional-469000
	4343d2c1788b6       04b4c447bb9d4                                                                                         About a minute ago   Running             kube-apiserver            0                   0be0679ca2c9a       kube-apiserver-functional-469000
	40bb3113dfe2b       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   1                   f23aef0968ed1       coredns-5dd5756b68-kvgnr
	c4e629cd2f988       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       1                   fd268a20f3495       storage-provisioner
	5a1cf0a2594ce       9961cbceaf234                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   ba778b61deead       kube-controller-manager-functional-469000
	0ad66d300a090       3ca3ca488cf13                                                                                         2 minutes ago        Exited              kube-proxy                1                   f9a9c460d3fba       kube-proxy-hszpr
	c72686f2c80ac       9cdd6470f48c8                                                                                         2 minutes ago        Exited              etcd                      1                   42184920b7c72       etcd-functional-469000
	b5f498ca7e78e       05c284c929889                                                                                         2 minutes ago        Exited              kube-scheduler            1                   a786344ae4c38       kube-scheduler-functional-469000
	
	* 
	* ==> coredns [40bb3113dfe2] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33907 - 8663 "HINFO IN 4962123427991571165.2602825726465843698. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009060833s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [4332214ab1c9] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:32804 - 59689 "HINFO IN 8388112797257266972.4034134438450169684. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009006485s
	[INFO] 10.244.0.1:11351 - 38894 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000106454s
	[INFO] 10.244.0.1:58791 - 12287 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000090329s
	[INFO] 10.244.0.1:35839 - 20001 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000029249s
	[INFO] 10.244.0.1:64662 - 20237 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001825249s
	[INFO] 10.244.0.1:56061 - 43742 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000262447s
	[INFO] 10.244.0.1:12130 - 41361 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000329735s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-469000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-469000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=functional-469000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T12_09_10_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:09:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-469000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:11:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:11:26 +0000   Thu, 07 Dec 2023 20:09:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:11:26 +0000   Thu, 07 Dec 2023 20:09:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:11:26 +0000   Thu, 07 Dec 2023 20:09:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:11:26 +0000   Thu, 07 Dec 2023 20:09:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-469000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904696Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904696Ki
	  pods:               110
	System Info:
	  Machine ID:                 e8d7616c753f4310b8858239a3565a95
	  System UUID:                e8d7616c753f4310b8858239a3565a95
	  Boot ID:                    e9365043-a0e9-4867-8f07-29bdb912ee67
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-759d89bdcc-728zk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  default                     hello-node-connect-7799dfb7c6-5d4cn           0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 coredns-5dd5756b68-kvgnr                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m22s
	  kube-system                 etcd-functional-469000                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m35s
	  kube-system                 kube-apiserver-functional-469000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-functional-469000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-proxy-hszpr                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-functional-469000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-w6cb8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-vqcn2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m21s                  kube-proxy       
	  Normal  Starting                 79s                    kube-proxy       
	  Normal  Starting                 2m                     kube-proxy       
	  Normal  Starting                 2m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m40s (x8 over 2m40s)  kubelet          Node functional-469000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m40s (x8 over 2m40s)  kubelet          Node functional-469000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m40s (x7 over 2m40s)  kubelet          Node functional-469000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m35s                  kubelet          Node functional-469000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m35s                  kubelet          Node functional-469000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m35s                  kubelet          Node functional-469000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m35s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m32s                  kubelet          Node functional-469000 status is now: NodeReady
	  Normal  RegisteredNode           2m23s                  node-controller  Node functional-469000 event: Registered Node functional-469000 in Controller
	  Normal  NodeNotReady             2m15s                  kubelet          Node functional-469000 status is now: NodeNotReady
	  Normal  RegisteredNode           108s                   node-controller  Node functional-469000 event: Registered Node functional-469000 in Controller
	  Normal  Starting                 83s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  83s (x8 over 83s)      kubelet          Node functional-469000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    83s (x8 over 83s)      kubelet          Node functional-469000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     83s (x7 over 83s)      kubelet          Node functional-469000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  83s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                    node-controller  Node functional-469000 event: Registered Node functional-469000 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.082771] systemd-fstab-generator[3654]: Ignoring "noauto" for root device
	[  +0.092549] systemd-fstab-generator[3667]: Ignoring "noauto" for root device
	[  +5.039015] kauditd_printk_skb: 69 callbacks suppressed
	[  +6.226487] systemd-fstab-generator[4238]: Ignoring "noauto" for root device
	[  +0.072983] systemd-fstab-generator[4249]: Ignoring "noauto" for root device
	[  +0.062842] systemd-fstab-generator[4260]: Ignoring "noauto" for root device
	[  +0.080043] systemd-fstab-generator[4271]: Ignoring "noauto" for root device
	[  +0.107699] systemd-fstab-generator[4345]: Ignoring "noauto" for root device
	[  +5.871663] kauditd_printk_skb: 94 callbacks suppressed
	[Dec 7 20:10] systemd-fstab-generator[6053]: Ignoring "noauto" for root device
	[  +0.147154] systemd-fstab-generator[6086]: Ignoring "noauto" for root device
	[  +0.108126] systemd-fstab-generator[6097]: Ignoring "noauto" for root device
	[  +0.100550] systemd-fstab-generator[6110]: Ignoring "noauto" for root device
	[  +5.122335] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.312967] systemd-fstab-generator[6820]: Ignoring "noauto" for root device
	[  +0.091704] systemd-fstab-generator[6833]: Ignoring "noauto" for root device
	[  +0.084100] systemd-fstab-generator[6844]: Ignoring "noauto" for root device
	[  +0.074470] systemd-fstab-generator[6855]: Ignoring "noauto" for root device
	[  +0.101022] systemd-fstab-generator[6928]: Ignoring "noauto" for root device
	[  +1.160640] systemd-fstab-generator[7186]: Ignoring "noauto" for root device
	[  +3.588415] kauditd_printk_skb: 101 callbacks suppressed
	[ +27.222539] kauditd_printk_skb: 9 callbacks suppressed
	[  +1.380298] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Dec 7 20:11] kauditd_printk_skb: 3 callbacks suppressed
	[ +12.751835] kauditd_printk_skb: 6 callbacks suppressed
	
	* 
	* ==> etcd [765063314ce0] <==
	* {"level":"info","ts":"2023-12-07T20:10:23.235327Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T20:10:23.235336Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T20:10:23.235441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-12-07T20:10:23.235491Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-12-07T20:10:23.235534Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:10:23.235553Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:10:23.243497Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-12-07T20:10:23.246714Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-12-07T20:10:23.246736Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-12-07T20:10:23.243538Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-12-07T20:10:23.246756Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-12-07T20:10:24.18839Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-12-07T20:10:24.18855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-12-07T20:10:24.188623Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-12-07T20:10:24.18903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-12-07T20:10:24.189059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-12-07T20:10:24.189108Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-12-07T20:10:24.189183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-12-07T20:10:24.193581Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-469000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T20:10:24.193579Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:10:24.194001Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T20:10:24.194043Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T20:10:24.193622Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:10:24.19604Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-12-07T20:10:24.196046Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [c72686f2c80a] <==
	* {"level":"info","ts":"2023-12-07T20:09:43.204559Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-12-07T20:09:44.389262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-12-07T20:09:44.389339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-12-07T20:09:44.389366Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-12-07T20:09:44.389384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-12-07T20:09:44.389397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-12-07T20:09:44.389423Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-12-07T20:09:44.389439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-12-07T20:09:44.390976Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-469000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T20:09:44.390986Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:09:44.391113Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:09:44.392313Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-07T20:09:44.392462Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T20:09:44.39248Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T20:09:44.393414Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-12-07T20:10:09.28617Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-12-07T20:10:09.286203Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"functional-469000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2023-12-07T20:10:09.286252Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-07T20:10:09.286294Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-07T20:10:09.30016Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-12-07T20:10:09.300187Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2023-12-07T20:10:09.301369Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-12-07T20:10:09.303161Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-12-07T20:10:09.303193Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-12-07T20:10:09.303197Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"functional-469000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> kernel <==
	*  20:11:45 up 2 min,  0 users,  load average: 0.43, 0.27, 0.11
	Linux functional-469000 5.10.57 #1 SMP PREEMPT Tue Dec 5 16:07:42 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4343d2c1788b] <==
	* I1207 20:10:24.861246       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1207 20:10:24.861299       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 20:10:24.861525       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1207 20:10:24.862338       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1207 20:10:24.881089       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1207 20:10:24.881100       1 aggregator.go:166] initial CRD sync complete...
	I1207 20:10:24.881102       1 autoregister_controller.go:141] Starting autoregister controller
	I1207 20:10:24.881104       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 20:10:24.881107       1 cache.go:39] Caches are synced for autoregister controller
	I1207 20:10:25.764669       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 20:10:26.368962       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1207 20:10:26.372175       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1207 20:10:26.384124       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1207 20:10:26.392504       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 20:10:26.394728       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 20:10:37.517264       1 controller.go:624] quota admission added evaluator for: endpoints
	I1207 20:10:37.553368       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 20:10:47.415716       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.120.115"}
	I1207 20:10:52.927659       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1207 20:10:52.973097       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.119.148"}
	I1207 20:11:05.002099       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.206.0"}
	I1207 20:11:14.456650       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.13.20"}
	I1207 20:11:43.559155       1 controller.go:624] quota admission added evaluator for: namespaces
	I1207 20:11:43.651502       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.214.114"}
	I1207 20:11:43.664601       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.21.99"}
	
	* 
	* ==> kube-controller-manager [5a1cf0a2594c] <==
	* I1207 20:09:57.489107       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I1207 20:09:57.489116       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1207 20:09:57.490289       1 shared_informer.go:318] Caches are synced for service account
	I1207 20:09:57.490337       1 shared_informer.go:318] Caches are synced for deployment
	I1207 20:09:57.494058       1 shared_informer.go:318] Caches are synced for TTL
	I1207 20:09:57.494071       1 shared_informer.go:318] Caches are synced for HPA
	I1207 20:09:57.494081       1 shared_informer.go:318] Caches are synced for ephemeral
	I1207 20:09:57.500743       1 shared_informer.go:318] Caches are synced for PV protection
	I1207 20:09:57.500808       1 shared_informer.go:318] Caches are synced for crt configmap
	I1207 20:09:57.500848       1 shared_informer.go:318] Caches are synced for node
	I1207 20:09:57.500927       1 range_allocator.go:174] "Sending events to api server"
	I1207 20:09:57.500953       1 range_allocator.go:178] "Starting range CIDR allocator"
	I1207 20:09:57.500983       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I1207 20:09:57.501002       1 shared_informer.go:318] Caches are synced for cidrallocator
	I1207 20:09:57.508233       1 shared_informer.go:318] Caches are synced for expand
	I1207 20:09:57.509337       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1207 20:09:57.510542       1 shared_informer.go:318] Caches are synced for attach detach
	I1207 20:09:57.514981       1 shared_informer.go:318] Caches are synced for stateful set
	I1207 20:09:57.522556       1 shared_informer.go:318] Caches are synced for persistent volume
	I1207 20:09:57.544065       1 shared_informer.go:318] Caches are synced for PVC protection
	I1207 20:09:57.598926       1 shared_informer.go:318] Caches are synced for resource quota
	I1207 20:09:57.607050       1 shared_informer.go:318] Caches are synced for resource quota
	I1207 20:09:58.014360       1 shared_informer.go:318] Caches are synced for garbage collector
	I1207 20:09:58.014375       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1207 20:09:58.015422       1 shared_informer.go:318] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [88a53e5b63ff] <==
	* E1207 20:11:43.599004       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1207 20:11:43.605042       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.970473ms"
	E1207 20:11:43.605056       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1207 20:11:43.605197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="6.171535ms"
	E1207 20:11:43.605206       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1207 20:11:43.605366       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1207 20:11:43.609441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="4.369584ms"
	E1207 20:11:43.609458       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1207 20:11:43.609478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="3.363939ms"
	E1207 20:11:43.609482       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1207 20:11:43.609492       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1207 20:11:43.609498       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1207 20:11:43.612744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="2.409713ms"
	E1207 20:11:43.612857       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1207 20:11:43.612901       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I1207 20:11:43.619607       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-w6cb8"
	I1207 20:11:43.622211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="5.146807ms"
	I1207 20:11:43.627965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="5.36952ms"
	I1207 20:11:43.628025       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="38.042µs"
	I1207 20:11:43.634182       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="11.625µs"
	I1207 20:11:43.636104       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-vqcn2"
	I1207 20:11:43.639944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.004198ms"
	I1207 20:11:43.645833       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.796778ms"
	I1207 20:11:43.647150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.959µs"
	I1207 20:11:43.652526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.376µs"
	
	* 
	* ==> kube-proxy [0ad66d300a09] <==
	* I1207 20:09:43.272497       1 server_others.go:69] "Using iptables proxy"
	E1207 20:09:43.273242       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8441/api/v1/nodes/functional-469000": dial tcp 192.168.105.4:8441: connect: connection refused
	I1207 20:09:45.019760       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I1207 20:09:45.030658       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 20:09:45.030673       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 20:09:45.031317       1 server_others.go:152] "Using iptables Proxier"
	I1207 20:09:45.031337       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 20:09:45.031405       1 server.go:846] "Version info" version="v1.28.4"
	I1207 20:09:45.031413       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:09:45.031802       1 config.go:188] "Starting service config controller"
	I1207 20:09:45.031814       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 20:09:45.031824       1 config.go:97] "Starting endpoint slice config controller"
	I1207 20:09:45.031829       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 20:09:45.032123       1 config.go:315] "Starting node config controller"
	I1207 20:09:45.032126       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 20:09:45.132288       1 shared_informer.go:318] Caches are synced for node config
	I1207 20:09:45.132339       1 shared_informer.go:318] Caches are synced for service config
	I1207 20:09:45.132357       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-proxy [b49b5f32fc20] <==
	* I1207 20:10:25.923875       1 server_others.go:69] "Using iptables proxy"
	I1207 20:10:25.931298       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I1207 20:10:25.939602       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1207 20:10:25.939616       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1207 20:10:25.940204       1 server_others.go:152] "Using iptables Proxier"
	I1207 20:10:25.940251       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1207 20:10:25.940348       1 server.go:846] "Version info" version="v1.28.4"
	I1207 20:10:25.940356       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:10:25.940726       1 config.go:188] "Starting service config controller"
	I1207 20:10:25.940736       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1207 20:10:25.940845       1 config.go:97] "Starting endpoint slice config controller"
	I1207 20:10:25.940848       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1207 20:10:25.941477       1 config.go:315] "Starting node config controller"
	I1207 20:10:25.941482       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1207 20:10:26.041153       1 shared_informer.go:318] Caches are synced for service config
	I1207 20:10:26.041158       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1207 20:10:26.041710       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b5f498ca7e78] <==
	* I1207 20:09:43.793878       1 serving.go:348] Generated self-signed cert in-memory
	W1207 20:09:45.002872       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 20:09:45.002905       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 20:09:45.002910       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 20:09:45.002913       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 20:09:45.010967       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1207 20:09:45.011063       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:09:45.014208       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1207 20:09:45.014290       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 20:09:45.014326       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 20:09:45.014349       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1207 20:09:45.115194       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 20:10:09.299994       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1207 20:10:09.300018       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1207 20:10:09.300110       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [e8e648d2c7a0] <==
	* I1207 20:10:23.733214       1 serving.go:348] Generated self-signed cert in-memory
	W1207 20:10:24.793051       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1207 20:10:24.793062       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 20:10:24.793066       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1207 20:10:24.793069       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1207 20:10:24.823589       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1207 20:10:24.823604       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1207 20:10:24.824807       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1207 20:10:24.824856       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1207 20:10:24.824866       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 20:10:24.824873       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1207 20:10:24.925927       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 20:08:53 UTC, ends at Thu 2023-12-07 20:11:45 UTC. --
	Dec 07 20:11:28 functional-469000 kubelet[7192]: E1207 20:11:28.810772    7192 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-5d4cn_default(ae073c1b-04c5-4a77-bdf3-f979080d6e15)\"" pod="default/hello-node-connect-7799dfb7c6-5d4cn" podUID="ae073c1b-04c5-4a77-bdf3-f979080d6e15"
	Dec 07 20:11:34 functional-469000 kubelet[7192]: I1207 20:11:34.601202    7192 topology_manager.go:215] "Topology Admit Handler" podUID="0f768b03-0e91-4025-a50a-10fe95e5e1c0" podNamespace="default" podName="busybox-mount"
	Dec 07 20:11:34 functional-469000 kubelet[7192]: I1207 20:11:34.648857    7192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5mnp\" (UniqueName: \"kubernetes.io/projected/0f768b03-0e91-4025-a50a-10fe95e5e1c0-kube-api-access-p5mnp\") pod \"busybox-mount\" (UID: \"0f768b03-0e91-4025-a50a-10fe95e5e1c0\") " pod="default/busybox-mount"
	Dec 07 20:11:34 functional-469000 kubelet[7192]: I1207 20:11:34.648882    7192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0f768b03-0e91-4025-a50a-10fe95e5e1c0-test-volume\") pod \"busybox-mount\" (UID: \"0f768b03-0e91-4025-a50a-10fe95e5e1c0\") " pod="default/busybox-mount"
	Dec 07 20:11:37 functional-469000 kubelet[7192]: I1207 20:11:37.977649    7192 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5mnp\" (UniqueName: \"kubernetes.io/projected/0f768b03-0e91-4025-a50a-10fe95e5e1c0-kube-api-access-p5mnp\") pod \"0f768b03-0e91-4025-a50a-10fe95e5e1c0\" (UID: \"0f768b03-0e91-4025-a50a-10fe95e5e1c0\") "
	Dec 07 20:11:37 functional-469000 kubelet[7192]: I1207 20:11:37.977671    7192 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0f768b03-0e91-4025-a50a-10fe95e5e1c0-test-volume\") pod \"0f768b03-0e91-4025-a50a-10fe95e5e1c0\" (UID: \"0f768b03-0e91-4025-a50a-10fe95e5e1c0\") "
	Dec 07 20:11:37 functional-469000 kubelet[7192]: I1207 20:11:37.977703    7192 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0f768b03-0e91-4025-a50a-10fe95e5e1c0-test-volume" (OuterVolumeSpecName: "test-volume") pod "0f768b03-0e91-4025-a50a-10fe95e5e1c0" (UID: "0f768b03-0e91-4025-a50a-10fe95e5e1c0"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 07 20:11:37 functional-469000 kubelet[7192]: I1207 20:11:37.980331    7192 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f768b03-0e91-4025-a50a-10fe95e5e1c0-kube-api-access-p5mnp" (OuterVolumeSpecName: "kube-api-access-p5mnp") pod "0f768b03-0e91-4025-a50a-10fe95e5e1c0" (UID: "0f768b03-0e91-4025-a50a-10fe95e5e1c0"). InnerVolumeSpecName "kube-api-access-p5mnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 07 20:11:38 functional-469000 kubelet[7192]: I1207 20:11:38.078563    7192 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p5mnp\" (UniqueName: \"kubernetes.io/projected/0f768b03-0e91-4025-a50a-10fe95e5e1c0-kube-api-access-p5mnp\") on node \"functional-469000\" DevicePath \"\""
	Dec 07 20:11:38 functional-469000 kubelet[7192]: I1207 20:11:38.078575    7192 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/0f768b03-0e91-4025-a50a-10fe95e5e1c0-test-volume\") on node \"functional-469000\" DevicePath \"\""
	Dec 07 20:11:38 functional-469000 kubelet[7192]: I1207 20:11:38.296078    7192 scope.go:117] "RemoveContainer" containerID="d5d1029aab9fdd2c8e48d4c049be99944b49c5757314a7d845a32f2222061aa0"
	Dec 07 20:11:38 functional-469000 kubelet[7192]: I1207 20:11:38.865441    7192 scope.go:117] "RemoveContainer" containerID="d5d1029aab9fdd2c8e48d4c049be99944b49c5757314a7d845a32f2222061aa0"
	Dec 07 20:11:38 functional-469000 kubelet[7192]: I1207 20:11:38.865619    7192 scope.go:117] "RemoveContainer" containerID="40436fec0ae063787c9faab2421cc9c7fc2032301e8c87314fe3675f8828c22b"
	Dec 07 20:11:38 functional-469000 kubelet[7192]: E1207 20:11:38.865711    7192 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 40s restarting failed container=echoserver-arm pod=hello-node-759d89bdcc-728zk_default(edf545b7-09a0-4d00-8955-0a840c08c06b)\"" pod="default/hello-node-759d89bdcc-728zk" podUID="edf545b7-09a0-4d00-8955-0a840c08c06b"
	Dec 07 20:11:38 functional-469000 kubelet[7192]: I1207 20:11:38.873436    7192 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59c0703b4ebf067a62daa27c94d0aecaab72664e289bb16dfd2aad1c409e1532"
	Dec 07 20:11:40 functional-469000 kubelet[7192]: I1207 20:11:40.296065    7192 scope.go:117] "RemoveContainer" containerID="20b6f0c6c5290a937ca4d71ed6f44adb1a40a6b40d32121a8f860bf522a884ed"
	Dec 07 20:11:40 functional-469000 kubelet[7192]: E1207 20:11:40.296168    7192 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-7799dfb7c6-5d4cn_default(ae073c1b-04c5-4a77-bdf3-f979080d6e15)\"" pod="default/hello-node-connect-7799dfb7c6-5d4cn" podUID="ae073c1b-04c5-4a77-bdf3-f979080d6e15"
	Dec 07 20:11:43 functional-469000 kubelet[7192]: I1207 20:11:43.624077    7192 topology_manager.go:215] "Topology Admit Handler" podUID="73a126a8-0303-4a5c-a6c9-3392b75056ee" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-w6cb8"
	Dec 07 20:11:43 functional-469000 kubelet[7192]: E1207 20:11:43.624122    7192 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0f768b03-0e91-4025-a50a-10fe95e5e1c0" containerName="mount-munger"
	Dec 07 20:11:43 functional-469000 kubelet[7192]: I1207 20:11:43.624139    7192 memory_manager.go:346] "RemoveStaleState removing state" podUID="0f768b03-0e91-4025-a50a-10fe95e5e1c0" containerName="mount-munger"
	Dec 07 20:11:43 functional-469000 kubelet[7192]: I1207 20:11:43.643354    7192 topology_manager.go:215] "Topology Admit Handler" podUID="7c7c4389-449d-4ae1-b358-27032b54122e" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-vqcn2"
	Dec 07 20:11:43 functional-469000 kubelet[7192]: I1207 20:11:43.712072    7192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z9qh\" (UniqueName: \"kubernetes.io/projected/73a126a8-0303-4a5c-a6c9-3392b75056ee-kube-api-access-5z9qh\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-w6cb8\" (UID: \"73a126a8-0303-4a5c-a6c9-3392b75056ee\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-w6cb8"
	Dec 07 20:11:43 functional-469000 kubelet[7192]: I1207 20:11:43.712099    7192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/73a126a8-0303-4a5c-a6c9-3392b75056ee-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-w6cb8\" (UID: \"73a126a8-0303-4a5c-a6c9-3392b75056ee\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-w6cb8"
	Dec 07 20:11:43 functional-469000 kubelet[7192]: I1207 20:11:43.712110    7192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7c7c4389-449d-4ae1-b358-27032b54122e-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-vqcn2\" (UID: \"7c7c4389-449d-4ae1-b358-27032b54122e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vqcn2"
	Dec 07 20:11:43 functional-469000 kubelet[7192]: I1207 20:11:43.712136    7192 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w5jn\" (UniqueName: \"kubernetes.io/projected/7c7c4389-449d-4ae1-b358-27032b54122e-kube-api-access-5w5jn\") pod \"kubernetes-dashboard-8694d4445c-vqcn2\" (UID: \"7c7c4389-449d-4ae1-b358-27032b54122e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-vqcn2"
	
	* 
	* ==> storage-provisioner [4047b8b7ab28] <==
	* I1207 20:10:25.913749       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 20:10:25.920352       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 20:10:25.920370       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 20:10:43.319077       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 20:10:43.319229       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-469000_5dd26f06-1231-4487-870c-89f0d2af0690!
	I1207 20:10:43.319801       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"970b49df-fe48-4070-bd64-9f2c2f218a2d", APIVersion:"v1", ResourceVersion:"594", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-469000_5dd26f06-1231-4487-870c-89f0d2af0690 became leader
	I1207 20:10:43.431610       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-469000_5dd26f06-1231-4487-870c-89f0d2af0690!
	I1207 20:11:11.610549       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1207 20:11:11.610964       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    99f1488d-e93d-4c39-9ce6-e3b39fd1d719 347 0 2023-12-07 20:09:23 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-12-07 20:09:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-fac82163-67cd-46f7-85d1-b5e72c44141a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  fac82163-67cd-46f7-85d1-b5e72c44141a 700 0 2023-12-07 20:11:11 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-12-07 20:11:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-12-07 20:11:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1207 20:11:11.611319       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-fac82163-67cd-46f7-85d1-b5e72c44141a" provisioned
	I1207 20:11:11.611357       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1207 20:11:11.611384       1 volume_store.go:212] Trying to save persistentvolume "pvc-fac82163-67cd-46f7-85d1-b5e72c44141a"
	I1207 20:11:11.611977       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"fac82163-67cd-46f7-85d1-b5e72c44141a", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1207 20:11:11.617332       1 volume_store.go:219] persistentvolume "pvc-fac82163-67cd-46f7-85d1-b5e72c44141a" saved
	I1207 20:11:11.618275       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"fac82163-67cd-46f7-85d1-b5e72c44141a", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-fac82163-67cd-46f7-85d1-b5e72c44141a
	
	* 
	* ==> storage-provisioner [c4e629cd2f98] <==
	* I1207 20:09:43.869784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 20:09:45.021633       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 20:09:45.021655       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 20:10:02.411209       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 20:10:02.411324       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"970b49df-fe48-4070-bd64-9f2c2f218a2d", APIVersion:"v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-469000_2829890a-a10b-4892-9837-31f9b382da00 became leader
	I1207 20:10:02.411339       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-469000_2829890a-a10b-4892-9837-31f9b382da00!
	I1207 20:10:02.511878       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-469000_2829890a-a10b-4892-9837-31f9b382da00!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-469000 -n functional-469000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-469000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-w6cb8 kubernetes-dashboard-8694d4445c-vqcn2
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-469000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-w6cb8 kubernetes-dashboard-8694d4445c-vqcn2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-469000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-w6cb8 kubernetes-dashboard-8694d4445c-vqcn2: exit status 1 (43.241291ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-469000/192.168.105.4
	Start Time:       Thu, 07 Dec 2023 12:11:34 -0800
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://40ec6bb60e470a8b6c62afae00c2b629fff1436d58d1e9cf91fb427457108ce8
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Thu, 07 Dec 2023 12:11:36 -0800
	      Finished:     Thu, 07 Dec 2023 12:11:36 -0800
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p5mnp (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-p5mnp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  11s   default-scheduler  Successfully assigned default/busybox-mount to functional-469000
	  Normal  Pulling    10s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9s    kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.577s (1.577s including waiting)
	  Normal  Created    9s    kubelet            Created container mount-munger
	  Normal  Started    9s    kubelet            Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-7fd5cb4ddc-w6cb8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-vqcn2" not found

** /stderr **
helpers_test.go:279: kubectl --context functional-469000 describe pod busybox-mount dashboard-metrics-scraper-7fd5cb4ddc-w6cb8 kubernetes-dashboard-8694d4445c-vqcn2: exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (31.40s)

TestImageBuild/serial/BuildWithBuildArg (1.08s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-203000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-203000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in dfbdf53e5ecd
	Removing intermediate container dfbdf53e5ecd
	 ---> 0af9d3774c03
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in edf7f2599e4d
	Removing intermediate container edf7f2599e4d
	 ---> e0a649121a44
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in a64f0dd28b61
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-203000 -n image-203000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-203000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                                        Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| mount          | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|                | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1274315149/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|                | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1274315149/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh            | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-469000 ssh findmnt                                                                                      | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| mount          | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|                | --kill=true                                                                                                        |                   |         |         |                     |                     |
	| start          | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|                | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start          | -p functional-469000 --dry-run                                                                                     | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| start          | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|                | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|                | --driver=qemu2                                                                                                     |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                                                                 | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | -p functional-469000                                                                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-469000                                                                                                  | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-469000                                                                                                  | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-469000                                                                                                  | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| image          | functional-469000                                                                                                  | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | image ls --format short                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-469000                                                                                                  | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | image ls --format yaml                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| ssh            | functional-469000 ssh pgrep                                                                                        | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|                | buildkitd                                                                                                          |                   |         |         |                     |                     |
	| image          | functional-469000 image build -t                                                                                   | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | localhost/my-image:functional-469000                                                                               |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                                                   |                   |         |         |                     |                     |
	| image          | functional-469000 image ls                                                                                         | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	| image          | functional-469000                                                                                                  | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | image ls --format json                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-469000                                                                                                  | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | image ls --format table                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| delete         | -p functional-469000                                                                                               | functional-469000 | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	| start          | -p image-203000 --driver=qemu2                                                                                     | image-203000      | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:12 PST |
	|                |                                                                                                                    |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                                                                                | image-203000      | jenkins | v1.32.0 | 07 Dec 23 12:12 PST | 07 Dec 23 12:12 PST |
	|                | ./testdata/image-build/test-normal                                                                                 |                   |         |         |                     |                     |
	|                | -p image-203000                                                                                                    |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                                                                                | image-203000      | jenkins | v1.32.0 | 07 Dec 23 12:12 PST | 07 Dec 23 12:12 PST |
	|                | --build-opt=build-arg=ENV_A=test_env_str                                                                           |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                                                                               |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                                                                                 |                   |         |         |                     |                     |
	|                | image-203000                                                                                                       |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 12:11:53
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 12:11:53.362805    2857 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:11:53.362957    2857 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:11:53.362958    2857 out.go:309] Setting ErrFile to fd 2...
	I1207 12:11:53.362960    2857 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:11:53.363089    2857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:11:53.364118    2857 out.go:303] Setting JSON to false
	I1207 12:11:53.381102    2857 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2484,"bootTime":1701977429,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:11:53.381200    2857 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:11:53.385099    2857 out.go:177] * [image-203000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:11:53.392059    2857 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:11:53.392091    2857 notify.go:220] Checking for updates...
	I1207 12:11:53.395882    2857 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:11:53.399009    2857 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:11:53.402040    2857 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:11:53.405035    2857 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:11:53.408112    2857 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:11:53.411190    2857 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:11:53.415043    2857 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:11:53.421980    2857 start.go:298] selected driver: qemu2
	I1207 12:11:53.421985    2857 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:11:53.421989    2857 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:11:53.422039    2857 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:11:53.425059    2857 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:11:53.430665    2857 start_flags.go:394] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1207 12:11:53.430750    2857 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 12:11:53.430792    2857 cni.go:84] Creating CNI manager for ""
	I1207 12:11:53.430798    2857 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:11:53.430803    2857 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:11:53.430814    2857 start_flags.go:323] config:
	{Name:image-203000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:image-203000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:11:53.435573    2857 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:11:53.442862    2857 out.go:177] * Starting control plane node image-203000 in cluster image-203000
	I1207 12:11:53.447030    2857 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:11:53.447044    2857 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:11:53.447053    2857 cache.go:56] Caching tarball of preloaded images
	I1207 12:11:53.447108    2857 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:11:53.447112    2857 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:11:53.447296    2857 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/config.json ...
	I1207 12:11:53.447305    2857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/config.json: {Name:mk9cc56970c97238cdf2fe7fae55f2d98433641f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:11:53.447486    2857 start.go:365] acquiring machines lock for image-203000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:11:53.447512    2857 start.go:369] acquired machines lock for "image-203000" in 23.25µs
	I1207 12:11:53.447522    2857 start.go:93] Provisioning new machine with config: &{Name:image-203000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:image-203000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:11:53.447549    2857 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:11:53.455015    2857 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1207 12:11:53.478402    2857 start.go:159] libmachine.API.Create for "image-203000" (driver="qemu2")
	I1207 12:11:53.478426    2857 client.go:168] LocalClient.Create starting
	I1207 12:11:53.478487    2857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:11:53.478522    2857 main.go:141] libmachine: Decoding PEM data...
	I1207 12:11:53.478533    2857 main.go:141] libmachine: Parsing certificate...
	I1207 12:11:53.478570    2857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:11:53.478590    2857 main.go:141] libmachine: Decoding PEM data...
	I1207 12:11:53.478598    2857 main.go:141] libmachine: Parsing certificate...
	I1207 12:11:53.478923    2857 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:11:53.647038    2857 main.go:141] libmachine: Creating SSH key...
	I1207 12:11:53.859572    2857 main.go:141] libmachine: Creating Disk image...
	I1207 12:11:53.859577    2857 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:11:53.859754    2857 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/disk.qcow2
	I1207 12:11:53.881169    2857 main.go:141] libmachine: STDOUT: 
	I1207 12:11:53.881184    2857 main.go:141] libmachine: STDERR: 
	I1207 12:11:53.881249    2857 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/disk.qcow2 +20000M
	I1207 12:11:53.892023    2857 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:11:53.892035    2857 main.go:141] libmachine: STDERR: 
	I1207 12:11:53.892063    2857 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/disk.qcow2
	I1207 12:11:53.892069    2857 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:11:53.892112    2857 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:a7:87:db:7c:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/disk.qcow2
	I1207 12:11:53.937498    2857 main.go:141] libmachine: STDOUT: 
	I1207 12:11:53.937521    2857 main.go:141] libmachine: STDERR: 
	I1207 12:11:53.937525    2857 main.go:141] libmachine: Attempt 0
	I1207 12:11:53.937539    2857 main.go:141] libmachine: Searching for e2:a7:87:db:7c:3a in /var/db/dhcpd_leases ...
	I1207 12:11:53.937614    2857 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1207 12:11:53.937629    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:11:53.937639    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:11:53.937643    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:11:53.937651    2857 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:11:55.939771    2857 main.go:141] libmachine: Attempt 1
	I1207 12:11:55.939820    2857 main.go:141] libmachine: Searching for e2:a7:87:db:7c:3a in /var/db/dhcpd_leases ...
	I1207 12:11:55.940143    2857 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1207 12:11:55.940191    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:11:55.940253    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:11:55.940280    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:11:55.940306    2857 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:11:57.941330    2857 main.go:141] libmachine: Attempt 2
	I1207 12:11:57.941369    2857 main.go:141] libmachine: Searching for e2:a7:87:db:7c:3a in /var/db/dhcpd_leases ...
	I1207 12:11:57.941618    2857 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1207 12:11:57.941658    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:11:57.941684    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:11:57.941712    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:11:57.941737    2857 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:11:59.943911    2857 main.go:141] libmachine: Attempt 3
	I1207 12:11:59.943983    2857 main.go:141] libmachine: Searching for e2:a7:87:db:7c:3a in /var/db/dhcpd_leases ...
	I1207 12:11:59.944092    2857 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1207 12:11:59.944110    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:11:59.944116    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:11:59.944120    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:11:59.944126    2857 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:12:01.945581    2857 main.go:141] libmachine: Attempt 4
	I1207 12:12:01.945586    2857 main.go:141] libmachine: Searching for e2:a7:87:db:7c:3a in /var/db/dhcpd_leases ...
	I1207 12:12:01.945641    2857 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1207 12:12:01.945645    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:12:01.945649    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:12:01.945653    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:12:01.945658    2857 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:12:03.947684    2857 main.go:141] libmachine: Attempt 5
	I1207 12:12:03.947689    2857 main.go:141] libmachine: Searching for e2:a7:87:db:7c:3a in /var/db/dhcpd_leases ...
	I1207 12:12:03.947725    2857 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1207 12:12:03.947729    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:12:03.947734    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:12:03.947738    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:12:03.947742    2857 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:12:05.949773    2857 main.go:141] libmachine: Attempt 6
	I1207 12:12:05.949792    2857 main.go:141] libmachine: Searching for e2:a7:87:db:7c:3a in /var/db/dhcpd_leases ...
	I1207 12:12:05.949855    2857 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I1207 12:12:05.949863    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:12:05.949867    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:12:05.949874    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:12:05.949878    2857 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:12:07.951913    2857 main.go:141] libmachine: Attempt 7
	I1207 12:12:07.951941    2857 main.go:141] libmachine: Searching for e2:a7:87:db:7c:3a in /var/db/dhcpd_leases ...
	I1207 12:12:07.952021    2857 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1207 12:12:07.952032    2857 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:e2:a7:87:db:7c:3a ID:1,e2:a7:87:db:7c:3a Lease:0x65737896}
	I1207 12:12:07.952034    2857 main.go:141] libmachine: Found match: e2:a7:87:db:7c:3a
	I1207 12:12:07.952045    2857 main.go:141] libmachine: IP: 192.168.105.5
	I1207 12:12:07.952048    2857 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
	I1207 12:12:08.958448    2857 machine.go:88] provisioning docker machine ...
	I1207 12:12:08.958462    2857 buildroot.go:166] provisioning hostname "image-203000"
	I1207 12:12:08.958493    2857 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:08.958741    2857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f2a70] 0x1005f51e0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1207 12:12:08.958745    2857 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-203000 && echo "image-203000" | sudo tee /etc/hostname
	I1207 12:12:09.025077    2857 main.go:141] libmachine: SSH cmd err, output: <nil>: image-203000
	
	I1207 12:12:09.025123    2857 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:09.025375    2857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f2a70] 0x1005f51e0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1207 12:12:09.025382    2857 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-203000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-203000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-203000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 12:12:09.091910    2857 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 12:12:09.091918    2857 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17719-1328/.minikube CaCertPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17719-1328/.minikube}
	I1207 12:12:09.091924    2857 buildroot.go:174] setting up certificates
	I1207 12:12:09.091928    2857 provision.go:83] configureAuth start
	I1207 12:12:09.091930    2857 provision.go:138] copyHostCerts
	I1207 12:12:09.091996    2857 exec_runner.go:144] found /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.pem, removing ...
	I1207 12:12:09.092000    2857 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.pem
	I1207 12:12:09.092117    2857 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.pem (1078 bytes)
	I1207 12:12:09.092301    2857 exec_runner.go:144] found /Users/jenkins/minikube-integration/17719-1328/.minikube/cert.pem, removing ...
	I1207 12:12:09.092303    2857 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17719-1328/.minikube/cert.pem
	I1207 12:12:09.092354    2857 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17719-1328/.minikube/cert.pem (1123 bytes)
	I1207 12:12:09.092463    2857 exec_runner.go:144] found /Users/jenkins/minikube-integration/17719-1328/.minikube/key.pem, removing ...
	I1207 12:12:09.092465    2857 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17719-1328/.minikube/key.pem
	I1207 12:12:09.092512    2857 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17719-1328/.minikube/key.pem (1679 bytes)
	I1207 12:12:09.092615    2857 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca-key.pem org=jenkins.image-203000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-203000]
	I1207 12:12:09.445767    2857 provision.go:172] copyRemoteCerts
	I1207 12:12:09.445816    2857 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 12:12:09.445826    2857 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/id_rsa Username:docker}
	I1207 12:12:09.479465    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1207 12:12:09.486320    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1207 12:12:09.493075    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1207 12:12:09.500286    2857 provision.go:86] duration metric: configureAuth took 408.365291ms
	I1207 12:12:09.500292    2857 buildroot.go:189] setting minikube options for container-runtime
	I1207 12:12:09.500385    2857 config.go:182] Loaded profile config "image-203000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:12:09.500415    2857 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:09.500630    2857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f2a70] 0x1005f51e0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1207 12:12:09.500636    2857 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1207 12:12:09.562128    2857 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1207 12:12:09.562133    2857 buildroot.go:70] root file system type: tmpfs
	I1207 12:12:09.562197    2857 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1207 12:12:09.562255    2857 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:09.562509    2857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f2a70] 0x1005f51e0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1207 12:12:09.562545    2857 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1207 12:12:09.628652    2857 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1207 12:12:09.628702    2857 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:09.628943    2857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f2a70] 0x1005f51e0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1207 12:12:09.628950    2857 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1207 12:12:09.957199    2857 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
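The diff-or-replace command above makes the unit install idempotent: docker.service is only swapped in (and the daemon restarted) when the newly rendered file differs, and the `diff: can't stat` error on first boot simply means no unit existed yet. A minimal Python sketch of the same compare-then-replace step (paths and contents here are illustrative, not minikube's code):

```python
import os
import tempfile

def install_if_changed(path, new_content):
    """Write `new_content` to `path` only when it differs, mirroring the
    `diff -u old new || { mv new old; ...restart...; }` pattern in the log."""
    try:
        with open(path) as f:
            if f.read() == new_content:
                return False  # identical: skip daemon-reload/restart
    except FileNotFoundError:
        pass  # first install: diff would fail with "can't stat"
    with open(path, "w") as f:
        f.write(new_content)
    return True  # caller would daemon-reload and restart the service here

# Demo: first call installs the unit, an identical rerun is a no-op.
unit_path = os.path.join(tempfile.mkdtemp(), "docker.service")
first = install_if_changed(unit_path, "[Unit]\nDescription=Docker\n")
second = install_if_changed(unit_path, "[Unit]\nDescription=Docker\n")
```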
	
	I1207 12:12:09.957207    2857 machine.go:91] provisioned docker machine in 998.777125ms
	I1207 12:12:09.957211    2857 client.go:171] LocalClient.Create took 16.479193667s
	I1207 12:12:09.957224    2857 start.go:167] duration metric: libmachine.API.Create for "image-203000" took 16.479236875s
	I1207 12:12:09.957227    2857 start.go:300] post-start starting for "image-203000" (driver="qemu2")
	I1207 12:12:09.957231    2857 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 12:12:09.957291    2857 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 12:12:09.957298    2857 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/id_rsa Username:docker}
	I1207 12:12:09.991506    2857 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 12:12:09.992800    2857 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 12:12:09.992809    2857 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17719-1328/.minikube/addons for local assets ...
	I1207 12:12:09.992885    2857 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17719-1328/.minikube/files for local assets ...
	I1207 12:12:09.992994    2857 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/17682.pem -> 17682.pem in /etc/ssl/certs
	I1207 12:12:09.993100    2857 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 12:12:09.995957    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/17682.pem --> /etc/ssl/certs/17682.pem (1708 bytes)
	I1207 12:12:10.002821    2857 start.go:303] post-start completed in 45.574833ms
	I1207 12:12:10.003226    2857 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/config.json ...
	I1207 12:12:10.003393    2857 start.go:128] duration metric: createHost completed in 16.556254167s
	I1207 12:12:10.003415    2857 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:10.003624    2857 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1005f2a70] 0x1005f51e0 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I1207 12:12:10.003627    2857 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1207 12:12:10.061491    2857 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701979929.809949252
	
	I1207 12:12:10.061498    2857 fix.go:206] guest clock: 1701979929.809949252
	I1207 12:12:10.061501    2857 fix.go:219] Guest: 2023-12-07 12:12:09.809949252 -0800 PST Remote: 2023-12-07 12:12:10.003394 -0800 PST m=+16.663569834 (delta=-193.444748ms)
	I1207 12:12:10.061510    2857 fix.go:190] guest clock delta is within tolerance: -193.444748ms
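fix.go compares the guest clock (read via `date`) against the host time and skips resynchronization when the delta stays within tolerance. The arithmetic on the two timestamps above, sketched in Python (the 1-second tolerance is an assumed illustration; the actual threshold is not shown in this log):

```python
def clock_delta_ok(guest_epoch, host_epoch, tolerance=1.0):
    """True when |guest - host| is within `tolerance` seconds."""
    return abs(guest_epoch - host_epoch) <= tolerance

# Figures from the log: guest 1701979929.809949252, host 1701979930.003394.
delta = 1701979929.809949252 - 1701979930.003394  # about -0.193444748 s
```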
	I1207 12:12:10.061511    2857 start.go:83] releasing machines lock for "image-203000", held for 16.614410333s
	I1207 12:12:10.061794    2857 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 12:12:10.061794    2857 ssh_runner.go:195] Run: cat /version.json
	I1207 12:12:10.061803    2857 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/id_rsa Username:docker}
	I1207 12:12:10.061812    2857 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/id_rsa Username:docker}
	I1207 12:12:10.139891    2857 ssh_runner.go:195] Run: systemctl --version
	I1207 12:12:10.141999    2857 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 12:12:10.143906    2857 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 12:12:10.143935    2857 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1207 12:12:10.149247    2857 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 12:12:10.149251    2857 start.go:475] detecting cgroup driver to use...
	I1207 12:12:10.149312    2857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 12:12:10.154674    2857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1207 12:12:10.157612    2857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1207 12:12:10.160666    2857 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1207 12:12:10.160685    2857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1207 12:12:10.163954    2857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 12:12:10.167142    2857 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1207 12:12:10.170222    2857 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 12:12:10.173163    2857 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 12:12:10.176443    2857 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
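The run of `sed -i -r` edits above switches containerd to the cgroupfs driver by rewriting matching lines of /etc/containerd/config.toml in place. The core substitution can be sketched with Python's `re` (the sample TOML line is illustrative):

```python
import re

def set_cgroupfs(toml_text):
    """Rewrite every `SystemdCgroup = ...` line to `false`, preserving
    indentation via a group-1 backreference, like the sed command above."""
    return re.sub(r"(?m)^( *)SystemdCgroup = .*$",
                  r"\1SystemdCgroup = false", toml_text)

sample = '    SystemdCgroup = true\n'
rewritten = set_cgroupfs(sample)  # indentation kept, value forced to false
```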
	I1207 12:12:10.179739    2857 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 12:12:10.182413    2857 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 12:12:10.185045    2857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:12:10.248796    2857 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1207 12:12:10.255170    2857 start.go:475] detecting cgroup driver to use...
	I1207 12:12:10.255221    2857 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1207 12:12:10.263661    2857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 12:12:10.268506    2857 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 12:12:10.275388    2857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 12:12:10.279691    2857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 12:12:10.284429    2857 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1207 12:12:10.326715    2857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 12:12:10.331702    2857 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 12:12:10.337001    2857 ssh_runner.go:195] Run: which cri-dockerd
	I1207 12:12:10.338377    2857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1207 12:12:10.340857    2857 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1207 12:12:10.346095    2857 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1207 12:12:10.433141    2857 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1207 12:12:10.512649    2857 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1207 12:12:10.512697    2857 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1207 12:12:10.518300    2857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:12:10.597178    2857 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 12:12:11.752910    2857 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.15574825s)
	I1207 12:12:11.752972    2857 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1207 12:12:11.821337    2857 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1207 12:12:11.883679    2857 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1207 12:12:11.965323    2857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:12:12.047859    2857 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1207 12:12:12.055298    2857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:12:12.136034    2857 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1207 12:12:12.160383    2857 start.go:522] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1207 12:12:12.160447    2857 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1207 12:12:12.162844    2857 start.go:543] Will wait 60s for crictl version
	I1207 12:12:12.162872    2857 ssh_runner.go:195] Run: which crictl
	I1207 12:12:12.164119    2857 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1207 12:12:12.183840    2857 start.go:559] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
	I1207 12:12:12.183904    2857 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 12:12:12.193492    2857 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 12:12:12.208920    2857 out.go:204] * Preparing Kubernetes v1.28.4 on Docker 24.0.7 ...
	I1207 12:12:12.209047    2857 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1207 12:12:12.210411    2857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
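The hosts update above is a filter-then-append pipeline: any stale `host.minikube.internal` line is stripped with `grep -v` before the fresh mapping is echoed back, so reruns never accumulate duplicates. The same step sketched in Python (string-level only; the real command rewrites /etc/hosts through a temp file and `sudo cp`):

```python
def upsert_hosts_entry(hosts_text, ip, name):
    """Drop lines ending in a tab plus `name`, then append `ip<TAB>name`,
    mirroring the grep -v / echo pipeline in the log above."""
    kept = [ln for ln in hosts_text.splitlines()
            if not ln.endswith("\t" + name)]
    kept.append(ip + "\t" + name)
    return "\n".join(kept) + "\n"

hosts = "127.0.0.1\tlocalhost\n192.168.105.1\thost.minikube.internal\n"
updated = upsert_hosts_entry(hosts, "192.168.105.1", "host.minikube.internal")
```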
	I1207 12:12:12.213964    2857 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:12:12.214020    2857 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 12:12:12.219178    2857 docker.go:671] Got preloaded images: 
	I1207 12:12:12.219182    2857 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.4 wasn't preloaded
	I1207 12:12:12.219219    2857 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1207 12:12:12.222150    2857 ssh_runner.go:195] Run: which lz4
	I1207 12:12:12.223407    2857 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 12:12:12.224620    2857 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 12:12:12.224627    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (357941720 bytes)
	I1207 12:12:13.527775    2857 docker.go:635] Took 1.304433 seconds to copy over tarball
	I1207 12:12:13.527824    2857 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 12:12:14.592357    2857 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.064535s)
	I1207 12:12:14.592375    2857 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 12:12:14.608296    2857 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1207 12:12:14.611811    2857 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1207 12:12:14.617085    2857 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:12:14.691452    2857 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 12:12:16.190880    2857 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.499451541s)
	I1207 12:12:16.190957    2857 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 12:12:16.196806    2857 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.4
	registry.k8s.io/kube-scheduler:v1.28.4
	registry.k8s.io/kube-controller-manager:v1.28.4
	registry.k8s.io/kube-proxy:v1.28.4
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1207 12:12:16.196814    2857 cache_images.go:84] Images are preloaded, skipping loading
	I1207 12:12:16.196865    2857 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1207 12:12:16.204619    2857 cni.go:84] Creating CNI manager for ""
	I1207 12:12:16.204625    2857 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:12:16.204634    2857 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 12:12:16.204643    2857 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-203000 NodeName:image-203000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1207 12:12:16.204718    2857 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-203000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
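One invariant worth noting in the generated config: the pod subnet (10.244.0.0/16) and the service subnet (10.96.0.0/12) must not overlap, and the in-cluster API ClusterIP 10.96.0.1 must fall inside the service range. A quick check with Python's stdlib `ipaddress`, using the CIDRs from the config above:

```python
import ipaddress

pod_cidr = ipaddress.ip_network("10.244.0.0/16")     # podSubnet
service_cidr = ipaddress.ip_network("10.96.0.0/12")  # serviceSubnet

# Disjoint ranges mean a pod IP can never collide with a ClusterIP,
# and the default kubernetes Service IP sits inside the service range.
disjoint = not pod_cidr.overlaps(service_cidr)
apiserver_ip_ok = ipaddress.ip_address("10.96.0.1") in service_cidr
```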
	
	I1207 12:12:16.204748    2857 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-203000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:image-203000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 12:12:16.204794    2857 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1207 12:12:16.208108    2857 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 12:12:16.208129    2857 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 12:12:16.211236    2857 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1207 12:12:16.216279    2857 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1207 12:12:16.221368    2857 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I1207 12:12:16.226352    2857 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I1207 12:12:16.227498    2857 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 12:12:16.231653    2857 certs.go:56] Setting up /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000 for IP: 192.168.105.5
	I1207 12:12:16.231660    2857 certs.go:190] acquiring lock for shared ca certs: {Name:mka2d4ba9e36871ccc0bd079595857e1e300747f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:12:16.231795    2857 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.key
	I1207 12:12:16.231837    2857 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.key
	I1207 12:12:16.231866    2857 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/client.key
	I1207 12:12:16.231872    2857 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/client.crt with IP's: []
	I1207 12:12:16.376711    2857 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/client.crt ...
	I1207 12:12:16.376716    2857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/client.crt: {Name:mkf024613aaa6cfa1ff622d8a232cd1a6f9dd55a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:12:16.376990    2857 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/client.key ...
	I1207 12:12:16.376992    2857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/client.key: {Name:mk35e8e01b6275cf09e3a48339f93197c8162846 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:12:16.377105    2857 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.key.e69b33ca
	I1207 12:12:16.377111    2857 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I1207 12:12:16.466310    2857 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.crt.e69b33ca ...
	I1207 12:12:16.466314    2857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.crt.e69b33ca: {Name:mk82da86fa977c446269b72fd63f8a7b5b0b5f64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:12:16.466449    2857 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.key.e69b33ca ...
	I1207 12:12:16.466451    2857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.key.e69b33ca: {Name:mk65435ae631830df54e89d3a7794c666163297a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:12:16.466563    2857 certs.go:337] copying /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.crt
	I1207 12:12:16.466786    2857 certs.go:341] copying /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.key
	I1207 12:12:16.466908    2857 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/proxy-client.key
	I1207 12:12:16.466915    2857 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/proxy-client.crt with IP's: []
	I1207 12:12:16.577502    2857 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/proxy-client.crt ...
	I1207 12:12:16.577507    2857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/proxy-client.crt: {Name:mk115eddd8ed3592a3d0cca795b5207741296156 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:12:16.577727    2857 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/proxy-client.key ...
	I1207 12:12:16.577730    2857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/proxy-client.key: {Name:mkf2c65e3361f7e0dd0603404b6f481337b231a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:12:16.577957    2857 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/1768.pem (1338 bytes)
	W1207 12:12:16.577985    2857 certs.go:433] ignoring /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/1768_empty.pem, impossibly tiny 0 bytes
	I1207 12:12:16.577989    2857 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 12:12:16.578006    2857 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem (1078 bytes)
	I1207 12:12:16.578021    2857 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem (1123 bytes)
	I1207 12:12:16.578035    2857 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/key.pem (1679 bytes)
	I1207 12:12:16.578068    2857 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/17682.pem (1708 bytes)
	I1207 12:12:16.578349    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 12:12:16.585806    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1207 12:12:16.592536    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 12:12:16.599817    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/image-203000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1207 12:12:16.606641    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 12:12:16.613372    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 12:12:16.620187    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 12:12:16.627266    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 12:12:16.633846    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/1768.pem --> /usr/share/ca-certificates/1768.pem (1338 bytes)
	I1207 12:12:16.640568    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/17682.pem --> /usr/share/ca-certificates/17682.pem (1708 bytes)
	I1207 12:12:16.647655    2857 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 12:12:16.654415    2857 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 12:12:16.659298    2857 ssh_runner.go:195] Run: openssl version
	I1207 12:12:16.661338    2857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17682.pem && ln -fs /usr/share/ca-certificates/17682.pem /etc/ssl/certs/17682.pem"
	I1207 12:12:16.664651    2857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17682.pem
	I1207 12:12:16.666120    2857 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:08 /usr/share/ca-certificates/17682.pem
	I1207 12:12:16.666138    2857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17682.pem
	I1207 12:12:16.667863    2857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17682.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 12:12:16.671057    2857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 12:12:16.674045    2857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 12:12:16.675403    2857 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I1207 12:12:16.675417    2857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 12:12:16.677271    2857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 12:12:16.680532    2857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1768.pem && ln -fs /usr/share/ca-certificates/1768.pem /etc/ssl/certs/1768.pem"
	I1207 12:12:16.683851    2857 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1768.pem
	I1207 12:12:16.685258    2857 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:08 /usr/share/ca-certificates/1768.pem
	I1207 12:12:16.685275    2857 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1768.pem
	I1207 12:12:16.686996    2857 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1768.pem /etc/ssl/certs/51391683.0"
	I1207 12:12:16.689891    2857 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 12:12:16.691143    2857 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 12:12:16.691174    2857 kubeadm.go:404] StartCluster: {Name:image-203000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:image-203000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:12:16.691235    2857 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1207 12:12:16.696494    2857 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 12:12:16.699830    2857 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 12:12:16.702970    2857 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 12:12:16.705748    2857 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 12:12:16.705765    2857 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1207 12:12:16.727246    2857 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1207 12:12:16.727277    2857 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 12:12:16.791609    2857 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 12:12:16.791677    2857 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 12:12:16.791727    2857 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 12:12:16.889404    2857 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 12:12:16.904590    2857 out.go:204]   - Generating certificates and keys ...
	I1207 12:12:16.904634    2857 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 12:12:16.904665    2857 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 12:12:16.972363    2857 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 12:12:17.048221    2857 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1207 12:12:17.126654    2857 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1207 12:12:17.203495    2857 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1207 12:12:17.301981    2857 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1207 12:12:17.302042    2857 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-203000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I1207 12:12:17.360877    2857 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1207 12:12:17.360952    2857 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-203000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I1207 12:12:17.420928    2857 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 12:12:17.487882    2857 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 12:12:17.524174    2857 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1207 12:12:17.524199    2857 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 12:12:17.665580    2857 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 12:12:17.744317    2857 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 12:12:17.828783    2857 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 12:12:17.967709    2857 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 12:12:17.967932    2857 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 12:12:17.969060    2857 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 12:12:17.981367    2857 out.go:204]   - Booting up control plane ...
	I1207 12:12:17.981430    2857 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 12:12:17.981471    2857 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 12:12:17.981506    2857 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 12:12:17.981560    2857 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 12:12:17.981620    2857 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 12:12:17.981640    2857 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 12:12:18.055658    2857 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 12:12:22.056803    2857 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.001166 seconds
	I1207 12:12:22.056865    2857 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 12:12:22.062602    2857 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 12:12:22.571445    2857 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 12:12:22.571547    2857 kubeadm.go:322] [mark-control-plane] Marking the node image-203000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1207 12:12:23.076412    2857 kubeadm.go:322] [bootstrap-token] Using token: vg94ps.7rr4avx8m38t5rsn
	I1207 12:12:23.089651    2857 out.go:204]   - Configuring RBAC rules ...
	I1207 12:12:23.089736    2857 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 12:12:23.089789    2857 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 12:12:23.091446    2857 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 12:12:23.092527    2857 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 12:12:23.093930    2857 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 12:12:23.095217    2857 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 12:12:23.100110    2857 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 12:12:23.278611    2857 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 12:12:23.486001    2857 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 12:12:23.486380    2857 kubeadm.go:322] 
	I1207 12:12:23.486412    2857 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 12:12:23.486414    2857 kubeadm.go:322] 
	I1207 12:12:23.486456    2857 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 12:12:23.486459    2857 kubeadm.go:322] 
	I1207 12:12:23.486475    2857 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 12:12:23.486510    2857 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 12:12:23.486535    2857 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 12:12:23.486537    2857 kubeadm.go:322] 
	I1207 12:12:23.486564    2857 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1207 12:12:23.486567    2857 kubeadm.go:322] 
	I1207 12:12:23.486596    2857 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1207 12:12:23.486597    2857 kubeadm.go:322] 
	I1207 12:12:23.486624    2857 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 12:12:23.486663    2857 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 12:12:23.486700    2857 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 12:12:23.486701    2857 kubeadm.go:322] 
	I1207 12:12:23.486749    2857 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 12:12:23.486789    2857 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 12:12:23.486791    2857 kubeadm.go:322] 
	I1207 12:12:23.486832    2857 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token vg94ps.7rr4avx8m38t5rsn \
	I1207 12:12:23.486886    2857 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:828939e74f1d12618d8bb944cf208455a494cd79da1e765a74ad9e48dba341a3 \
	I1207 12:12:23.486896    2857 kubeadm.go:322] 	--control-plane 
	I1207 12:12:23.486898    2857 kubeadm.go:322] 
	I1207 12:12:23.486957    2857 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 12:12:23.486971    2857 kubeadm.go:322] 
	I1207 12:12:23.487021    2857 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token vg94ps.7rr4avx8m38t5rsn \
	I1207 12:12:23.487076    2857 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:828939e74f1d12618d8bb944cf208455a494cd79da1e765a74ad9e48dba341a3 
	I1207 12:12:23.487131    2857 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 12:12:23.487137    2857 cni.go:84] Creating CNI manager for ""
	I1207 12:12:23.487144    2857 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:12:23.491629    2857 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1207 12:12:23.498459    2857 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1207 12:12:23.501467    2857 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1207 12:12:23.506114    2857 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 12:12:23.506148    2857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:12:23.506157    2857 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=image-203000 minikube.k8s.io/updated_at=2023_12_07T12_12_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:12:23.565552    2857 kubeadm.go:1088] duration metric: took 59.432833ms to wait for elevateKubeSystemPrivileges.
	I1207 12:12:23.565562    2857 ops.go:34] apiserver oom_adj: -16
	I1207 12:12:23.565565    2857 kubeadm.go:406] StartCluster complete in 6.874564541s
	I1207 12:12:23.565573    2857 settings.go:142] acquiring lock: {Name:mk64a7588accf4b6bd8e16cdbaa1b2c1768d52b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:12:23.565642    2857 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:12:23.566053    2857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/kubeconfig: {Name:mk1f9e67cb7d73aba54460262958078aba7f1051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:12:23.566235    2857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 12:12:23.566284    2857 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 12:12:23.566318    2857 addons.go:69] Setting storage-provisioner=true in profile "image-203000"
	I1207 12:12:23.566324    2857 addons.go:231] Setting addon storage-provisioner=true in "image-203000"
	I1207 12:12:23.566344    2857 host.go:66] Checking if "image-203000" exists ...
	I1207 12:12:23.566352    2857 addons.go:69] Setting default-storageclass=true in profile "image-203000"
	I1207 12:12:23.566358    2857 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-203000"
	I1207 12:12:23.566362    2857 config.go:182] Loaded profile config "image-203000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:12:23.567459    2857 addons.go:231] Setting addon default-storageclass=true in "image-203000"
	I1207 12:12:23.567465    2857 host.go:66] Checking if "image-203000" exists ...
	I1207 12:12:23.572576    2857 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 12:12:23.576618    2857 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 12:12:23.576621    2857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 12:12:23.576629    2857 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/id_rsa Username:docker}
	I1207 12:12:23.577498    2857 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 12:12:23.577501    2857 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 12:12:23.577505    2857 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/image-203000/id_rsa Username:docker}
	I1207 12:12:23.583687    2857 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-203000" context rescaled to 1 replicas
	I1207 12:12:23.583700    2857 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:12:23.590509    2857 out.go:177] * Verifying Kubernetes components...
	I1207 12:12:23.594531    2857 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 12:12:23.616837    2857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 12:12:23.619441    2857 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 12:12:23.619758    2857 api_server.go:52] waiting for apiserver process to appear ...
	I1207 12:12:23.619782    2857 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 12:12:23.624782    2857 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 12:12:24.125355    2857 start.go:929] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I1207 12:12:24.125362    2857 api_server.go:72] duration metric: took 541.664333ms to wait for apiserver process to appear ...
	I1207 12:12:24.125368    2857 api_server.go:88] waiting for apiserver healthz status ...
	I1207 12:12:24.125374    2857 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I1207 12:12:24.128262    2857 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I1207 12:12:24.129202    2857 api_server.go:141] control plane version: v1.28.4
	I1207 12:12:24.129205    2857 api_server.go:131] duration metric: took 3.836208ms to wait for apiserver health ...
	I1207 12:12:24.129210    2857 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 12:12:24.137426    2857 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1207 12:12:24.131531    2857 system_pods.go:59] 5 kube-system pods found
	I1207 12:12:24.141384    2857 system_pods.go:61] "etcd-image-203000" [51de2915-74fa-491d-bc60-0ddd6cfbe635] Pending
	I1207 12:12:24.141383    2857 addons.go:502] enable addons completed in 575.11675ms: enabled=[storage-provisioner default-storageclass]
	I1207 12:12:24.141387    2857 system_pods.go:61] "kube-apiserver-image-203000" [52487858-01ad-491e-bed1-2f7ca6d40038] Pending
	I1207 12:12:24.141389    2857 system_pods.go:61] "kube-controller-manager-image-203000" [73acfb8f-5dbd-4ae1-8c70-7f2a48617e1f] Pending
	I1207 12:12:24.141391    2857 system_pods.go:61] "kube-scheduler-image-203000" [2527a076-3e2b-404d-8760-a35085d30f49] Pending
	I1207 12:12:24.141393    2857 system_pods.go:61] "storage-provisioner" [ffea15aa-1849-4e35-affc-6063aaf06e58] Pending
	I1207 12:12:24.141395    2857 system_pods.go:74] duration metric: took 12.183041ms to wait for pod list to return data ...
	I1207 12:12:24.141398    2857 kubeadm.go:581] duration metric: took 557.702917ms to wait for : map[apiserver:true system_pods:true] ...
	I1207 12:12:24.141403    2857 node_conditions.go:102] verifying NodePressure condition ...
	I1207 12:12:24.142574    2857 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1207 12:12:24.142581    2857 node_conditions.go:123] node cpu capacity is 2
	I1207 12:12:24.142584    2857 node_conditions.go:105] duration metric: took 1.179916ms to run NodePressure ...
	I1207 12:12:24.142589    2857 start.go:228] waiting for startup goroutines ...
	I1207 12:12:24.142591    2857 start.go:233] waiting for cluster config update ...
	I1207 12:12:24.142595    2857 start.go:242] writing updated cluster config ...
	I1207 12:12:24.142793    2857 ssh_runner.go:195] Run: rm -f paused
	I1207 12:12:24.170841    2857 start.go:600] kubectl: 1.28.2, cluster: 1.28.4 (minor skew: 0)
	I1207 12:12:24.175483    2857 out.go:177] * Done! kubectl is now configured to use "image-203000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-12-07 20:12:06 UTC, ends at Thu 2023-12-07 20:12:26 UTC. --
	Dec 07 20:12:19 image-203000 cri-dockerd[1006]: time="2023-12-07T20:12:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6fbdcbf949932e2121fbbe153081173768d611776347b61b29b204d43e0c7e67/resolv.conf as [nameserver 192.168.105.1]"
	Dec 07 20:12:19 image-203000 cri-dockerd[1006]: time="2023-12-07T20:12:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/63f307998e38a324f99af7fb6ef73af1e5f08162f4a87aaacdf594af21fcc3fa/resolv.conf as [nameserver 192.168.105.1]"
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.061546298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.061676714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.061705756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.061742714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.062480006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.062533881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.062580006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.062609506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.092962131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.093132173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.093146006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 07 20:12:19 image-203000 dockerd[1120]: time="2023-12-07T20:12:19.093154923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:12:25 image-203000 dockerd[1114]: time="2023-12-07T20:12:25.526894717Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Dec 07 20:12:25 image-203000 dockerd[1114]: time="2023-12-07T20:12:25.651058342Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Dec 07 20:12:25 image-203000 dockerd[1114]: time="2023-12-07T20:12:25.670211092Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Dec 07 20:12:25 image-203000 dockerd[1120]: time="2023-12-07T20:12:25.710303509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 07 20:12:25 image-203000 dockerd[1120]: time="2023-12-07T20:12:25.710332217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:12:25 image-203000 dockerd[1120]: time="2023-12-07T20:12:25.710338676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 07 20:12:25 image-203000 dockerd[1120]: time="2023-12-07T20:12:25.710521967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:12:25 image-203000 dockerd[1114]: time="2023-12-07T20:12:25.857271051Z" level=info msg="ignoring event" container=a64f0dd28b61b0cbaa1fa303f723dfc9373b7fa6d468eee4d736367658fc1de4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:12:25 image-203000 dockerd[1120]: time="2023-12-07T20:12:25.858052759Z" level=info msg="shim disconnected" id=a64f0dd28b61b0cbaa1fa303f723dfc9373b7fa6d468eee4d736367658fc1de4 namespace=moby
	Dec 07 20:12:25 image-203000 dockerd[1120]: time="2023-12-07T20:12:25.858144176Z" level=warning msg="cleaning up after shim disconnected" id=a64f0dd28b61b0cbaa1fa303f723dfc9373b7fa6d468eee4d736367658fc1de4 namespace=moby
	Dec 07 20:12:25 image-203000 dockerd[1120]: time="2023-12-07T20:12:25.858153551Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	cf2a4c5b5f22c       9961cbceaf234       7 seconds ago       Running             kube-controller-manager   0                   63f307998e38a       kube-controller-manager-image-203000
	74c386c8b9a0e       9cdd6470f48c8       7 seconds ago       Running             etcd                      0                   6fbdcbf949932       etcd-image-203000
	40c4e3e6f18ce       04b4c447bb9d4       7 seconds ago       Running             kube-apiserver            0                   950c54990c689       kube-apiserver-image-203000
	5d1613d121f5f       05c284c929889       8 seconds ago       Running             kube-scheduler            0                   b52d749ff6ea0       kube-scheduler-image-203000
	
	* 
	* ==> describe nodes <==
	* Name:               image-203000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-203000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=image-203000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T12_12_23_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:12:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-203000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:12:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:12:23 +0000   Thu, 07 Dec 2023 20:12:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:12:23 +0000   Thu, 07 Dec 2023 20:12:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:12:23 +0000   Thu, 07 Dec 2023 20:12:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Thu, 07 Dec 2023 20:12:23 +0000   Thu, 07 Dec 2023 20:12:19 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-203000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904696Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904696Ki
	  pods:               110
	System Info:
	  Machine ID:                 8284fc1c01834fd3951ab08baabbe3fb
	  System UUID:                8284fc1c01834fd3951ab08baabbe3fb
	  Boot ID:                    000b15a0-84db-4706-afca-4c001fc67169
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-203000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3s
	  kube-system                 kube-apiserver-image-203000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-image-203000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-image-203000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From     Message
	  ----    ------                   ----             ----     -------
	  Normal  Starting                 8s               kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)  kubelet  Node image-203000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)  kubelet  Node image-203000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)  kubelet  Node image-203000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	  Normal  Starting                 3s               kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  3s               kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3s               kubelet  Node image-203000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3s               kubelet  Node image-203000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3s               kubelet  Node image-203000 status is now: NodeHasSufficientPID
	
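	A note on the `%!)(MISSING)` artifact that raw `minikube logs` output can show in the resource tables above: kubectl's rendered table cells are passed through a printf-style logger, so a literal `(5%)` is parsed as the verb `%)` with no matching argument, and Go's fmt package emits `%!)(MISSING)` in its place. A minimal Go sketch of that failure mode (the `logf` wrapper is a hypothetical stand-in for the logging path):

	```go
	package main

	import "fmt"

	func main() {
		// Hypothetical stand-in for a printf-style logger that receives
		// already-rendered kubectl table text as its format string.
		logf := func(format string, args ...any) string {
			return fmt.Sprintf(format, args...)
		}

		// "%)" is treated as an unknown verb with no operand, so fmt
		// substitutes "%!)(MISSING)" for it.
		fmt.Println(logf("100m (5%)")) // 100m (5%!)(MISSING)

		// Escaping the percent sign ("%%") round-trips cleanly.
		fmt.Println(logf("100m (5%%)")) // 100m (5%)
	}
	```

	The fix on the logging side is either to escape `%` before formatting or to log pre-rendered text with a non-formatting call such as `fmt.Println`.
	
	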
	* 
	* ==> dmesg <==
	* [Dec 7 20:12] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.650713] EINJ: EINJ table not found.
	[  +0.550296] systemd-fstab-generator[116]: Ignoring "noauto" for root device
	[  +3.459358] systemd-fstab-generator[486]: Ignoring "noauto" for root device
	[  +0.064267] systemd-fstab-generator[497]: Ignoring "noauto" for root device
	[  +0.417402] kauditd_printk_skb: 43 callbacks suppressed
	[  +0.003529] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.184889] systemd-fstab-generator[714]: Ignoring "noauto" for root device
	[  +0.080894] systemd-fstab-generator[725]: Ignoring "noauto" for root device
	[  +0.083348] systemd-fstab-generator[738]: Ignoring "noauto" for root device
	[  +1.223648] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[  +0.065356] systemd-fstab-generator[936]: Ignoring "noauto" for root device
	[  +0.081507] systemd-fstab-generator[947]: Ignoring "noauto" for root device
	[  +0.081464] systemd-fstab-generator[958]: Ignoring "noauto" for root device
	[  +0.089092] systemd-fstab-generator[999]: Ignoring "noauto" for root device
	[  +2.554240] systemd-fstab-generator[1107]: Ignoring "noauto" for root device
	[  +1.480738] kauditd_printk_skb: 149 callbacks suppressed
	[  +1.880055] systemd-fstab-generator[1489]: Ignoring "noauto" for root device
	[  +5.117293] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.005124] systemd-fstab-generator[2255]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [74c386c8b9a0] <==
	* {"level":"info","ts":"2023-12-07T20:12:19.206937Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T20:12:19.207326Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T20:12:19.207006Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-12-07T20:12:19.207424Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-12-07T20:12:19.207248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 switched to configuration voters=(6403572207504089856)"}
	{"level":"info","ts":"2023-12-07T20:12:19.207519Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-12-07T20:12:19.207561Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-12-07T20:12:19.298457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-12-07T20:12:19.298481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-12-07T20:12:19.298549Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-12-07T20:12:19.298557Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-12-07T20:12:19.29856Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-12-07T20:12:19.298565Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-12-07T20:12:19.298569Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-12-07T20:12:19.306694Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-203000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-12-07T20:12:19.306776Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:12:19.306819Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:12:19.307297Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-12-07T20:12:19.307322Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-12-07T20:12:19.307651Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.5:2379"}
	{"level":"info","ts":"2023-12-07T20:12:19.30867Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-12-07T20:12:19.308701Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-12-07T20:12:19.319984Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:12:19.320721Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-12-07T20:12:19.320736Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	* 
	* ==> kernel <==
	*  20:12:26 up 0 min,  0 users,  load average: 0.23, 0.05, 0.02
	Linux image-203000 5.10.57 #1 SMP PREEMPT Tue Dec 5 16:07:42 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [40c4e3e6f18c] <==
	* I1207 20:12:20.418369       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1207 20:12:20.418387       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1207 20:12:20.418414       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1207 20:12:20.418430       1 shared_informer.go:318] Caches are synced for configmaps
	I1207 20:12:20.419061       1 controller.go:624] quota admission added evaluator for: namespaces
	I1207 20:12:20.419126       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1207 20:12:20.438000       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1207 20:12:20.439001       1 aggregator.go:166] initial CRD sync complete...
	I1207 20:12:20.439040       1 autoregister_controller.go:141] Starting autoregister controller
	I1207 20:12:20.439058       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1207 20:12:20.439075       1 cache.go:39] Caches are synced for autoregister controller
	I1207 20:12:20.606563       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 20:12:21.322525       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1207 20:12:21.324220       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1207 20:12:21.324264       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1207 20:12:21.453546       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 20:12:21.464470       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1207 20:12:21.525646       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1207 20:12:21.527266       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I1207 20:12:21.527555       1 controller.go:624] quota admission added evaluator for: endpoints
	I1207 20:12:21.528792       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 20:12:22.344723       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1207 20:12:23.022789       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1207 20:12:23.026847       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1207 20:12:23.031048       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [cf2a4c5b5f22] <==
	* I1207 20:12:22.371600       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I1207 20:12:22.371606       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I1207 20:12:22.371612       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I1207 20:12:22.374450       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I1207 20:12:22.374508       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I1207 20:12:22.374535       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I1207 20:12:22.377270       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I1207 20:12:22.377521       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I1207 20:12:22.377531       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I1207 20:12:22.441856       1 shared_informer.go:318] Caches are synced for tokens
	I1207 20:12:22.444132       1 controllermanager.go:642] "Started controller" controller="replicaset-controller"
	I1207 20:12:22.444173       1 replica_set.go:214] "Starting controller" name="replicaset"
	I1207 20:12:22.444177       1 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
	I1207 20:12:22.593542       1 controllermanager.go:642] "Started controller" controller="statefulset-controller"
	I1207 20:12:22.593582       1 stateful_set.go:161] "Starting stateful set controller"
	I1207 20:12:22.593586       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I1207 20:12:22.743809       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I1207 20:12:22.743837       1 ttl_controller.go:124] "Starting TTL controller"
	I1207 20:12:22.743842       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I1207 20:12:22.894512       1 controllermanager.go:642] "Started controller" controller="ephemeral-volume-controller"
	I1207 20:12:22.894560       1 controller.go:169] "Starting ephemeral volume controller"
	I1207 20:12:22.894566       1 shared_informer.go:311] Waiting for caches to sync for ephemeral
	I1207 20:12:23.044564       1 controllermanager.go:642] "Started controller" controller="endpoints-controller"
	I1207 20:12:23.044640       1 endpoints_controller.go:174] "Starting endpoint controller"
	I1207 20:12:23.044646       1 shared_informer.go:311] Waiting for caches to sync for endpoint
	
	* 
	* ==> kube-scheduler [5d1613d121f5] <==
	* W1207 20:12:20.363238       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 20:12:20.363256       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1207 20:12:20.363302       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 20:12:20.363334       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1207 20:12:20.363369       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 20:12:20.363399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1207 20:12:20.363437       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 20:12:20.363453       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 20:12:20.363495       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 20:12:20.363523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 20:12:20.363567       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 20:12:20.363585       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1207 20:12:20.363614       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 20:12:20.363646       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1207 20:12:20.363675       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 20:12:20.363693       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1207 20:12:20.363757       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 20:12:20.363779       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1207 20:12:21.167068       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 20:12:21.167102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1207 20:12:21.208855       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1207 20:12:21.208862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1207 20:12:21.404734       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 20:12:21.404828       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1207 20:12:21.660034       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 20:12:06 UTC, ends at Thu 2023-12-07 20:12:26 UTC. --
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.178635    2261 kubelet_node_status.go:108] "Node was previously registered" node="image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.178711    2261 kubelet_node_status.go:73] "Successfully registered node" node="image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.199622    2261 topology_manager.go:215] "Topology Admit Handler" podUID="4d48e7f0fc9e2ad634e455fa439aa1a2" podNamespace="kube-system" podName="etcd-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.199719    2261 topology_manager.go:215] "Topology Admit Handler" podUID="4422a4056e231bb65675cd6a645fe32b" podNamespace="kube-system" podName="kube-apiserver-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.199749    2261 topology_manager.go:215] "Topology Admit Handler" podUID="e58289c5ad7ced1f93ff5f6b3f5c4760" podNamespace="kube-system" podName="kube-controller-manager-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.199763    2261 topology_manager.go:215] "Topology Admit Handler" podUID="1f7436c5e78962396a67a5629c7fe350" podNamespace="kube-system" podName="kube-scheduler-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.374258    2261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4422a4056e231bb65675cd6a645fe32b-ca-certs\") pod \"kube-apiserver-image-203000\" (UID: \"4422a4056e231bb65675cd6a645fe32b\") " pod="kube-system/kube-apiserver-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.374297    2261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4422a4056e231bb65675cd6a645fe32b-k8s-certs\") pod \"kube-apiserver-image-203000\" (UID: \"4422a4056e231bb65675cd6a645fe32b\") " pod="kube-system/kube-apiserver-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.374308    2261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e58289c5ad7ced1f93ff5f6b3f5c4760-ca-certs\") pod \"kube-controller-manager-image-203000\" (UID: \"e58289c5ad7ced1f93ff5f6b3f5c4760\") " pod="kube-system/kube-controller-manager-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.374321    2261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e58289c5ad7ced1f93ff5f6b3f5c4760-flexvolume-dir\") pod \"kube-controller-manager-image-203000\" (UID: \"e58289c5ad7ced1f93ff5f6b3f5c4760\") " pod="kube-system/kube-controller-manager-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.374334    2261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e58289c5ad7ced1f93ff5f6b3f5c4760-k8s-certs\") pod \"kube-controller-manager-image-203000\" (UID: \"e58289c5ad7ced1f93ff5f6b3f5c4760\") " pod="kube-system/kube-controller-manager-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.374345    2261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e58289c5ad7ced1f93ff5f6b3f5c4760-kubeconfig\") pod \"kube-controller-manager-image-203000\" (UID: \"e58289c5ad7ced1f93ff5f6b3f5c4760\") " pod="kube-system/kube-controller-manager-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.374354    2261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/4d48e7f0fc9e2ad634e455fa439aa1a2-etcd-data\") pod \"etcd-image-203000\" (UID: \"4d48e7f0fc9e2ad634e455fa439aa1a2\") " pod="kube-system/etcd-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.374440    2261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4422a4056e231bb65675cd6a645fe32b-usr-share-ca-certificates\") pod \"kube-apiserver-image-203000\" (UID: \"4422a4056e231bb65675cd6a645fe32b\") " pod="kube-system/kube-apiserver-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.374703    2261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e58289c5ad7ced1f93ff5f6b3f5c4760-usr-share-ca-certificates\") pod \"kube-controller-manager-image-203000\" (UID: \"e58289c5ad7ced1f93ff5f6b3f5c4760\") " pod="kube-system/kube-controller-manager-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.374715    2261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1f7436c5e78962396a67a5629c7fe350-kubeconfig\") pod \"kube-scheduler-image-203000\" (UID: \"1f7436c5e78962396a67a5629c7fe350\") " pod="kube-system/kube-scheduler-image-203000"
	Dec 07 20:12:23 image-203000 kubelet[2261]: I1207 20:12:23.374724    2261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/4d48e7f0fc9e2ad634e455fa439aa1a2-etcd-certs\") pod \"etcd-image-203000\" (UID: \"4d48e7f0fc9e2ad634e455fa439aa1a2\") " pod="kube-system/etcd-image-203000"
	Dec 07 20:12:24 image-203000 kubelet[2261]: I1207 20:12:24.055141    2261 apiserver.go:52] "Watching apiserver"
	Dec 07 20:12:24 image-203000 kubelet[2261]: I1207 20:12:24.073686    2261 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Dec 07 20:12:24 image-203000 kubelet[2261]: E1207 20:12:24.132258    2261 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"etcd-image-203000\" already exists" pod="kube-system/etcd-image-203000"
	Dec 07 20:12:24 image-203000 kubelet[2261]: E1207 20:12:24.132545    2261 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-image-203000\" already exists" pod="kube-system/kube-apiserver-image-203000"
	Dec 07 20:12:24 image-203000 kubelet[2261]: I1207 20:12:24.132885    2261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-203000" podStartSLOduration=1.132856133 podCreationTimestamp="2023-12-07 20:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-07 20:12:24.1327698 +0000 UTC m=+1.123356335" watchObservedRunningTime="2023-12-07 20:12:24.132856133 +0000 UTC m=+1.123442626"
	Dec 07 20:12:24 image-203000 kubelet[2261]: I1207 20:12:24.139375    2261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-203000" podStartSLOduration=1.139359258 podCreationTimestamp="2023-12-07 20:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-07 20:12:24.13622205 +0000 UTC m=+1.126808585" watchObservedRunningTime="2023-12-07 20:12:24.139359258 +0000 UTC m=+1.129945835"
	Dec 07 20:12:24 image-203000 kubelet[2261]: I1207 20:12:24.142579    2261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-203000" podStartSLOduration=1.142565842 podCreationTimestamp="2023-12-07 20:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-07 20:12:24.139304133 +0000 UTC m=+1.129890668" watchObservedRunningTime="2023-12-07 20:12:24.142565842 +0000 UTC m=+1.133152376"
	Dec 07 20:12:24 image-203000 kubelet[2261]: I1207 20:12:24.142623    2261 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-203000" podStartSLOduration=1.142616633 podCreationTimestamp="2023-12-07 20:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-12-07 20:12:24.142468467 +0000 UTC m=+1.133054960" watchObservedRunningTime="2023-12-07 20:12:24.142616633 +0000 UTC m=+1.133203168"

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-203000 -n image-203000
helpers_test.go:261: (dbg) Run:  kubectl --context image-203000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-203000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-203000 describe pod storage-provisioner: exit status 1 (38.742875ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-203000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.08s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (54.85s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-427000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1207 12:14:09.385064    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-427000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.492348833s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-427000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-427000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [feb944bc-6846-4c5b-93c1-a78c1c46a161] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [feb944bc-6846-4c5b-93c1-a78c1c46a161] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.020777875s
addons_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-427000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-427000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-427000 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:296: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.02981675s)

-- stdout --
	;; connection timed out; no servers could be reached

-- /stdout --
addons_test.go:298: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:302: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:305: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-427000 addons disable ingress-dns --alsologtostderr -v=1
E1207 12:14:37.090642    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
addons_test.go:305: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-427000 addons disable ingress-dns --alsologtostderr -v=1: (5.933435167s)
addons_test.go:310: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-427000 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-427000 addons disable ingress --alsologtostderr -v=1: (7.080923375s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-427000 -n ingress-addon-legacy-427000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-427000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                   Args                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| dashboard      | --url --port 36195                       | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | -p functional-469000                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| update-context | functional-469000                        | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-469000                        | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| update-context | functional-469000                        | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | update-context                           |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                   |                             |         |         |                     |                     |
	| image          | functional-469000                        | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | image ls --format short                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-469000                        | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | image ls --format yaml                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| ssh            | functional-469000 ssh pgrep              | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST |                     |
	|                | buildkitd                                |                             |         |         |                     |                     |
	| image          | functional-469000 image build -t         | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | localhost/my-image:functional-469000     |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr         |                             |         |         |                     |                     |
	| image          | functional-469000 image ls               | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	| image          | functional-469000                        | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | image ls --format json                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| image          | functional-469000                        | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	|                | image ls --format table                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                        |                             |         |         |                     |                     |
	| delete         | -p functional-469000                     | functional-469000           | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:11 PST |
	| start          | -p image-203000 --driver=qemu2           | image-203000                | jenkins | v1.32.0 | 07 Dec 23 12:11 PST | 07 Dec 23 12:12 PST |
	|                |                                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-203000                | jenkins | v1.32.0 | 07 Dec 23 12:12 PST | 07 Dec 23 12:12 PST |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | -p image-203000                          |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-203000                | jenkins | v1.32.0 | 07 Dec 23 12:12 PST | 07 Dec 23 12:12 PST |
	|                | --build-opt=build-arg=ENV_A=test_env_str |                             |         |         |                     |                     |
	|                | --build-opt=no-cache                     |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p       |                             |         |         |                     |                     |
	|                | image-203000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-203000                | jenkins | v1.32.0 | 07 Dec 23 12:12 PST | 07 Dec 23 12:12 PST |
	|                | ./testdata/image-build/test-normal       |                             |         |         |                     |                     |
	|                | --build-opt=no-cache -p                  |                             |         |         |                     |                     |
	|                | image-203000                             |                             |         |         |                     |                     |
	| image          | build -t aaa:latest                      | image-203000                | jenkins | v1.32.0 | 07 Dec 23 12:12 PST | 07 Dec 23 12:12 PST |
	|                | -f inner/Dockerfile                      |                             |         |         |                     |                     |
	|                | ./testdata/image-build/test-f            |                             |         |         |                     |                     |
	|                | -p image-203000                          |                             |         |         |                     |                     |
	| delete         | -p image-203000                          | image-203000                | jenkins | v1.32.0 | 07 Dec 23 12:12 PST | 07 Dec 23 12:12 PST |
	| start          | -p ingress-addon-legacy-427000           | ingress-addon-legacy-427000 | jenkins | v1.32.0 | 07 Dec 23 12:12 PST | 07 Dec 23 12:13 PST |
	|                | --kubernetes-version=v1.18.20            |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	|                | --driver=qemu2                           |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-427000              | ingress-addon-legacy-427000 | jenkins | v1.32.0 | 07 Dec 23 12:13 PST | 07 Dec 23 12:13 PST |
	|                | addons enable ingress                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-427000              | ingress-addon-legacy-427000 | jenkins | v1.32.0 | 07 Dec 23 12:13 PST | 07 Dec 23 12:13 PST |
	|                | addons enable ingress-dns                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                   |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-427000              | ingress-addon-legacy-427000 | jenkins | v1.32.0 | 07 Dec 23 12:14 PST | 07 Dec 23 12:14 PST |
	|                | ssh curl -s http://127.0.0.1/            |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'             |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-427000 ip           | ingress-addon-legacy-427000 | jenkins | v1.32.0 | 07 Dec 23 12:14 PST | 07 Dec 23 12:14 PST |
	| addons         | ingress-addon-legacy-427000              | ingress-addon-legacy-427000 | jenkins | v1.32.0 | 07 Dec 23 12:14 PST | 07 Dec 23 12:14 PST |
	|                | addons disable ingress-dns               |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-427000              | ingress-addon-legacy-427000 | jenkins | v1.32.0 | 07 Dec 23 12:14 PST | 07 Dec 23 12:14 PST |
	|                | addons disable ingress                   |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                   |                             |         |         |                     |                     |
	|----------------|------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 12:12:27
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 12:12:27.304873    2908 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:12:27.305039    2908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:12:27.305042    2908 out.go:309] Setting ErrFile to fd 2...
	I1207 12:12:27.305044    2908 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:12:27.305167    2908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:12:27.306281    2908 out.go:303] Setting JSON to false
	I1207 12:12:27.322620    2908 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2518,"bootTime":1701977429,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:12:27.322679    2908 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:12:27.327653    2908 out.go:177] * [ingress-addon-legacy-427000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:12:27.334610    2908 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:12:27.338593    2908 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:12:27.334668    2908 notify.go:220] Checking for updates...
	I1207 12:12:27.341673    2908 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:12:27.344585    2908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:12:27.347607    2908 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:12:27.350654    2908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:12:27.353849    2908 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:12:27.357620    2908 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:12:27.364500    2908 start.go:298] selected driver: qemu2
	I1207 12:12:27.364508    2908 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:12:27.364513    2908 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:12:27.366788    2908 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:12:27.369620    2908 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:12:27.372730    2908 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:12:27.372789    2908 cni.go:84] Creating CNI manager for ""
	I1207 12:12:27.372800    2908 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 12:12:27.372806    2908 start_flags.go:323] config:
	{Name:ingress-addon-legacy-427000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-427000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:12:27.377317    2908 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:12:27.384602    2908 out.go:177] * Starting control plane node ingress-addon-legacy-427000 in cluster ingress-addon-legacy-427000
	I1207 12:12:27.388643    2908 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1207 12:12:27.441112    2908 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1207 12:12:27.441131    2908 cache.go:56] Caching tarball of preloaded images
	I1207 12:12:27.441299    2908 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1207 12:12:27.447622    2908 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1207 12:12:27.455584    2908 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:12:27.528805    2908 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I1207 12:12:37.299593    2908 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:12:37.299736    2908 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:12:38.048912    2908 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1207 12:12:38.049127    2908 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/config.json ...
	I1207 12:12:38.049147    2908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/config.json: {Name:mk6cf029896f1aa8e7f4a26bfc6685acd88c51a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:12:38.049376    2908 start.go:365] acquiring machines lock for ingress-addon-legacy-427000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:12:38.049405    2908 start.go:369] acquired machines lock for "ingress-addon-legacy-427000" in 21.667µs
	I1207 12:12:38.049418    2908 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-427000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-427000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:12:38.049460    2908 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:12:38.053399    2908 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1207 12:12:38.068727    2908 start.go:159] libmachine.API.Create for "ingress-addon-legacy-427000" (driver="qemu2")
	I1207 12:12:38.068759    2908 client.go:168] LocalClient.Create starting
	I1207 12:12:38.068836    2908 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:12:38.068868    2908 main.go:141] libmachine: Decoding PEM data...
	I1207 12:12:38.068880    2908 main.go:141] libmachine: Parsing certificate...
	I1207 12:12:38.068918    2908 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:12:38.068940    2908 main.go:141] libmachine: Decoding PEM data...
	I1207 12:12:38.068947    2908 main.go:141] libmachine: Parsing certificate...
	I1207 12:12:38.069306    2908 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:12:38.188454    2908 main.go:141] libmachine: Creating SSH key...
	I1207 12:12:38.408193    2908 main.go:141] libmachine: Creating Disk image...
	I1207 12:12:38.408201    2908 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:12:38.408428    2908 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/disk.qcow2
	I1207 12:12:38.421243    2908 main.go:141] libmachine: STDOUT: 
	I1207 12:12:38.421265    2908 main.go:141] libmachine: STDERR: 
	I1207 12:12:38.421329    2908 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/disk.qcow2 +20000M
	I1207 12:12:38.432019    2908 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:12:38.432035    2908 main.go:141] libmachine: STDERR: 
	I1207 12:12:38.432057    2908 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/disk.qcow2
	I1207 12:12:38.432067    2908 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:12:38.432115    2908 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:d9:33:1d:c8:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/disk.qcow2
	I1207 12:12:38.467828    2908 main.go:141] libmachine: STDOUT: 
	I1207 12:12:38.467851    2908 main.go:141] libmachine: STDERR: 
	I1207 12:12:38.467856    2908 main.go:141] libmachine: Attempt 0
	I1207 12:12:38.467871    2908 main.go:141] libmachine: Searching for 9e:d9:33:1d:c8:66 in /var/db/dhcpd_leases ...
	I1207 12:12:38.467937    2908 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1207 12:12:38.467956    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:e2:a7:87:db:7c:3a ID:1,e2:a7:87:db:7c:3a Lease:0x65737896}
	I1207 12:12:38.467963    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:12:38.467970    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:12:38.467976    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:12:38.467984    2908 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:12:40.470104    2908 main.go:141] libmachine: Attempt 1
	I1207 12:12:40.470320    2908 main.go:141] libmachine: Searching for 9e:d9:33:1d:c8:66 in /var/db/dhcpd_leases ...
	I1207 12:12:40.470628    2908 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1207 12:12:40.470683    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:e2:a7:87:db:7c:3a ID:1,e2:a7:87:db:7c:3a Lease:0x65737896}
	I1207 12:12:40.470716    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:12:40.470750    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:12:40.470785    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:12:40.470815    2908 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:12:42.473325    2908 main.go:141] libmachine: Attempt 2
	I1207 12:12:42.473535    2908 main.go:141] libmachine: Searching for 9e:d9:33:1d:c8:66 in /var/db/dhcpd_leases ...
	I1207 12:12:42.473855    2908 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1207 12:12:42.473905    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:e2:a7:87:db:7c:3a ID:1,e2:a7:87:db:7c:3a Lease:0x65737896}
	I1207 12:12:42.473949    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:12:42.473982    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:12:42.474010    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:12:42.474041    2908 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:12:44.474276    2908 main.go:141] libmachine: Attempt 3
	I1207 12:12:44.474305    2908 main.go:141] libmachine: Searching for 9e:d9:33:1d:c8:66 in /var/db/dhcpd_leases ...
	I1207 12:12:44.474400    2908 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1207 12:12:44.474413    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:e2:a7:87:db:7c:3a ID:1,e2:a7:87:db:7c:3a Lease:0x65737896}
	I1207 12:12:44.474418    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:12:44.474422    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:12:44.474427    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:12:44.474435    2908 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:12:46.476430    2908 main.go:141] libmachine: Attempt 4
	I1207 12:12:46.476441    2908 main.go:141] libmachine: Searching for 9e:d9:33:1d:c8:66 in /var/db/dhcpd_leases ...
	I1207 12:12:46.476479    2908 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1207 12:12:46.476485    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:e2:a7:87:db:7c:3a ID:1,e2:a7:87:db:7c:3a Lease:0x65737896}
	I1207 12:12:46.476505    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:12:46.476510    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:12:46.476516    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:12:46.476521    2908 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:12:48.478519    2908 main.go:141] libmachine: Attempt 5
	I1207 12:12:48.478526    2908 main.go:141] libmachine: Searching for 9e:d9:33:1d:c8:66 in /var/db/dhcpd_leases ...
	I1207 12:12:48.478562    2908 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1207 12:12:48.478576    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:e2:a7:87:db:7c:3a ID:1,e2:a7:87:db:7c:3a Lease:0x65737896}
	I1207 12:12:48.478581    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:12:48.478587    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:12:48.478595    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:12:48.478601    2908 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:12:50.480254    2908 main.go:141] libmachine: Attempt 6
	I1207 12:12:50.480275    2908 main.go:141] libmachine: Searching for 9e:d9:33:1d:c8:66 in /var/db/dhcpd_leases ...
	I1207 12:12:50.480350    2908 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I1207 12:12:50.480359    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:e2:a7:87:db:7c:3a ID:1,e2:a7:87:db:7c:3a Lease:0x65737896}
	I1207 12:12:50.480365    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:a:65:79:21:45:fe ID:1,a:65:79:21:45:fe Lease:0x657377d5}
	I1207 12:12:50.480370    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:ce:6f:7f:8a:7d:64 ID:1,ce:6f:7f:8a:7d:64 Lease:0x65722646}
	I1207 12:12:50.480376    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:12:b5:83:6f:32:61 ID:1,12:b5:83:6f:32:61 Lease:0x657225bb}
	I1207 12:12:50.480385    2908 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:da:98:b3:c0:e3:d9 ID:1,da:98:b3:c0:e3:d9 Lease:0x657375c3}
	I1207 12:12:52.482452    2908 main.go:141] libmachine: Attempt 7
	I1207 12:12:52.482480    2908 main.go:141] libmachine: Searching for 9e:d9:33:1d:c8:66 in /var/db/dhcpd_leases ...
	I1207 12:12:52.482618    2908 main.go:141] libmachine: Found 6 entries in /var/db/dhcpd_leases!
	I1207 12:12:52.482631    2908 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:9e:d9:33:1d:c8:66 ID:1,9e:d9:33:1d:c8:66 Lease:0x657378c3}
	I1207 12:12:52.482635    2908 main.go:141] libmachine: Found match: 9e:d9:33:1d:c8:66
	I1207 12:12:52.482650    2908 main.go:141] libmachine: IP: 192.168.105.6
	I1207 12:12:52.482656    2908 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I1207 12:12:54.503649    2908 machine.go:88] provisioning docker machine ...
	I1207 12:12:54.503706    2908 buildroot.go:166] provisioning hostname "ingress-addon-legacy-427000"
	I1207 12:12:54.503926    2908 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:54.504842    2908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6aa70] 0x102c6d1e0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1207 12:12:54.504875    2908 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-427000 && echo "ingress-addon-legacy-427000" | sudo tee /etc/hostname
	I1207 12:12:54.600277    2908 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-427000
	
	I1207 12:12:54.600408    2908 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:54.600853    2908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6aa70] 0x102c6d1e0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1207 12:12:54.600868    2908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-427000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-427000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-427000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1207 12:12:54.671279    2908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1207 12:12:54.671304    2908 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17719-1328/.minikube CaCertPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17719-1328/.minikube}
	I1207 12:12:54.671326    2908 buildroot.go:174] setting up certificates
	I1207 12:12:54.671335    2908 provision.go:83] configureAuth start
	I1207 12:12:54.671342    2908 provision.go:138] copyHostCerts
	I1207 12:12:54.671393    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.pem
	I1207 12:12:54.671461    2908 exec_runner.go:144] found /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.pem, removing ...
	I1207 12:12:54.671468    2908 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.pem
	I1207 12:12:54.671690    2908 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.pem (1078 bytes)
	I1207 12:12:54.671945    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cert.pem
	I1207 12:12:54.671970    2908 exec_runner.go:144] found /Users/jenkins/minikube-integration/17719-1328/.minikube/cert.pem, removing ...
	I1207 12:12:54.671975    2908 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17719-1328/.minikube/cert.pem
	I1207 12:12:54.672048    2908 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17719-1328/.minikube/cert.pem (1123 bytes)
	I1207 12:12:54.672285    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/17719-1328/.minikube/key.pem
	I1207 12:12:54.672328    2908 exec_runner.go:144] found /Users/jenkins/minikube-integration/17719-1328/.minikube/key.pem, removing ...
	I1207 12:12:54.672333    2908 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17719-1328/.minikube/key.pem
	I1207 12:12:54.672429    2908 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17719-1328/.minikube/key.pem (1679 bytes)
	I1207 12:12:54.672570    2908 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-427000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-427000]
	I1207 12:12:54.884115    2908 provision.go:172] copyRemoteCerts
	I1207 12:12:54.884160    2908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1207 12:12:54.884174    2908 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/id_rsa Username:docker}
	I1207 12:12:54.917614    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1207 12:12:54.917670    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1207 12:12:54.925176    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1207 12:12:54.925212    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1207 12:12:54.932540    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1207 12:12:54.932591    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1207 12:12:54.939623    2908 provision.go:86] duration metric: configureAuth took 268.288166ms
	I1207 12:12:54.939631    2908 buildroot.go:189] setting minikube options for container-runtime
	I1207 12:12:54.939732    2908 config.go:182] Loaded profile config "ingress-addon-legacy-427000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1207 12:12:54.939765    2908 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:54.939983    2908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6aa70] 0x102c6d1e0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1207 12:12:54.939990    2908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1207 12:12:54.999693    2908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1207 12:12:54.999704    2908 buildroot.go:70] root file system type: tmpfs
	I1207 12:12:54.999769    2908 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1207 12:12:54.999808    2908 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:55.000058    2908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6aa70] 0x102c6d1e0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1207 12:12:55.000093    2908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1207 12:12:55.066927    2908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1207 12:12:55.066976    2908 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:55.067232    2908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6aa70] 0x102c6d1e0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1207 12:12:55.067242    2908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1207 12:12:55.411842    2908 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1207 12:12:55.411854    2908 machine.go:91] provisioned docker machine in 908.201541ms
	I1207 12:12:55.411860    2908 client.go:171] LocalClient.Create took 17.343529459s
	I1207 12:12:55.411877    2908 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-427000" took 17.343585292s
	I1207 12:12:55.411884    2908 start.go:300] post-start starting for "ingress-addon-legacy-427000" (driver="qemu2")
	I1207 12:12:55.411888    2908 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1207 12:12:55.411956    2908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1207 12:12:55.411965    2908 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/id_rsa Username:docker}
	I1207 12:12:55.444155    2908 ssh_runner.go:195] Run: cat /etc/os-release
	I1207 12:12:55.445540    2908 info.go:137] Remote host: Buildroot 2021.02.12
	I1207 12:12:55.445551    2908 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17719-1328/.minikube/addons for local assets ...
	I1207 12:12:55.445620    2908 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17719-1328/.minikube/files for local assets ...
	I1207 12:12:55.445723    2908 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/17682.pem -> 17682.pem in /etc/ssl/certs
	I1207 12:12:55.445727    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/17682.pem -> /etc/ssl/certs/17682.pem
	I1207 12:12:55.445842    2908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1207 12:12:55.448496    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/17682.pem --> /etc/ssl/certs/17682.pem (1708 bytes)
	I1207 12:12:55.454843    2908 start.go:303] post-start completed in 42.95625ms
	I1207 12:12:55.455214    2908 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/config.json ...
	I1207 12:12:55.455398    2908 start.go:128] duration metric: createHost completed in 17.406368458s
	I1207 12:12:55.455427    2908 main.go:141] libmachine: Using SSH client type: native
	I1207 12:12:55.455646    2908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102c6aa70] 0x102c6d1e0 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I1207 12:12:55.455650    2908 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1207 12:12:55.518882    2908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701979975.746716794
	
	I1207 12:12:55.518893    2908 fix.go:206] guest clock: 1701979975.746716794
	I1207 12:12:55.518898    2908 fix.go:219] Guest: 2023-12-07 12:12:55.746716794 -0800 PST Remote: 2023-12-07 12:12:55.455401 -0800 PST m=+28.172566876 (delta=291.315794ms)
	I1207 12:12:55.518907    2908 fix.go:190] guest clock delta is within tolerance: 291.315794ms
	I1207 12:12:55.518910    2908 start.go:83] releasing machines lock for "ingress-addon-legacy-427000", held for 17.469936417s
	I1207 12:12:55.519181    2908 ssh_runner.go:195] Run: cat /version.json
	I1207 12:12:55.519190    2908 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/id_rsa Username:docker}
	I1207 12:12:55.519204    2908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1207 12:12:55.519225    2908 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/id_rsa Username:docker}
	I1207 12:12:55.598285    2908 ssh_runner.go:195] Run: systemctl --version
	I1207 12:12:55.601132    2908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1207 12:12:55.603623    2908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1207 12:12:55.603654    2908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1207 12:12:55.607380    2908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1207 12:12:55.613450    2908 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1207 12:12:55.613458    2908 start.go:475] detecting cgroup driver to use...
	I1207 12:12:55.613535    2908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 12:12:55.621037    2908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1207 12:12:55.624521    2908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1207 12:12:55.627605    2908 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1207 12:12:55.627638    2908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1207 12:12:55.630553    2908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 12:12:55.633750    2908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1207 12:12:55.637047    2908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1207 12:12:55.640335    2908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1207 12:12:55.643522    2908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1207 12:12:55.646374    2908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1207 12:12:55.649599    2908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1207 12:12:55.652763    2908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:12:55.717202    2908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1207 12:12:55.723474    2908 start.go:475] detecting cgroup driver to use...
	I1207 12:12:55.723551    2908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1207 12:12:55.730223    2908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 12:12:55.735377    2908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1207 12:12:55.741781    2908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1207 12:12:55.746833    2908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 12:12:55.751291    2908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1207 12:12:55.796213    2908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1207 12:12:55.801424    2908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1207 12:12:55.806811    2908 ssh_runner.go:195] Run: which cri-dockerd
	I1207 12:12:55.808079    2908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1207 12:12:55.810664    2908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1207 12:12:55.815990    2908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1207 12:12:55.885077    2908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1207 12:12:55.964939    2908 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1207 12:12:55.965003    2908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1207 12:12:55.970490    2908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:12:56.050281    2908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 12:12:57.206132    2908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.155863458s)
	I1207 12:12:57.206198    2908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 12:12:57.221851    2908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1207 12:12:57.238945    2908 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.7 ...
	I1207 12:12:57.239009    2908 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I1207 12:12:57.240396    2908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 12:12:57.244073    2908 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1207 12:12:57.244113    2908 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 12:12:57.252616    2908 docker.go:671] Got preloaded images: 
	I1207 12:12:57.252623    2908 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1207 12:12:57.252659    2908 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1207 12:12:57.255615    2908 ssh_runner.go:195] Run: which lz4
	I1207 12:12:57.256953    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I1207 12:12:57.257051    2908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1207 12:12:57.258395    2908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1207 12:12:57.258402    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I1207 12:12:58.944171    2908 docker.go:635] Took 1.687211 seconds to copy over tarball
	I1207 12:12:58.944237    2908 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1207 12:13:00.287018    2908 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.342796041s)
	I1207 12:13:00.287070    2908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1207 12:13:00.311108    2908 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1207 12:13:00.317619    2908 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I1207 12:13:00.323244    2908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1207 12:13:00.402963    2908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1207 12:13:02.087460    2908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.684521333s)
	I1207 12:13:02.087552    2908 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1207 12:13:02.095405    2908 docker.go:671] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1207 12:13:02.095413    2908 docker.go:677] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I1207 12:13:02.095417    2908 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1207 12:13:02.110615    2908 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1207 12:13:02.110690    2908 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1207 12:13:02.111034    2908 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 12:13:02.111045    2908 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1207 12:13:02.111152    2908 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1207 12:13:02.111322    2908 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1207 12:13:02.111377    2908 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1207 12:13:02.114816    2908 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 12:13:02.119269    2908 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1207 12:13:02.119327    2908 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1207 12:13:02.119357    2908 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1207 12:13:02.120237    2908 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1207 12:13:02.120439    2908 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 12:13:02.120484    2908 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1207 12:13:02.120518    2908 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1207 12:13:02.122654    2908 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	W1207 12:13:02.791701    2908 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	W1207 12:13:02.791700    2908 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1207 12:13:02.792321    2908 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1207 12:13:02.792322    2908 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	W1207 12:13:02.793988    2908 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	W1207 12:13:02.794203    2908 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I1207 12:13:02.794319    2908 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1207 12:13:02.794556    2908 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	W1207 12:13:02.818956    2908 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1207 12:13:02.819272    2908 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1207 12:13:02.827625    2908 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1207 12:13:02.828865    2908 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I1207 12:13:02.828895    2908 docker.go:323] Removing image: registry.k8s.io/coredns:1.6.7
	I1207 12:13:02.828952    2908 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	I1207 12:13:02.829450    2908 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I1207 12:13:02.829477    2908 docker.go:323] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1207 12:13:02.829526    2908 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1207 12:13:02.834276    2908 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I1207 12:13:02.834306    2908 docker.go:323] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1207 12:13:02.834362    2908 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	W1207 12:13:02.839382    2908 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I1207 12:13:02.839584    2908 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 12:13:02.844591    2908 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I1207 12:13:02.844618    2908 docker.go:323] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1207 12:13:02.844681    2908 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1207 12:13:02.855114    2908 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I1207 12:13:02.855137    2908 docker.go:323] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1207 12:13:02.855199    2908 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I1207 12:13:02.855368    2908 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I1207 12:13:02.855381    2908 docker.go:323] Removing image: registry.k8s.io/pause:3.2
	I1207 12:13:02.855415    2908 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I1207 12:13:02.864027    2908 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	I1207 12:13:02.864070    2908 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I1207 12:13:02.866566    2908 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I1207 12:13:02.870171    2908 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I1207 12:13:02.870197    2908 docker.go:323] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 12:13:02.870253    2908 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1207 12:13:02.873534    2908 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
	I1207 12:13:02.879173    2908 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I1207 12:13:02.879186    2908 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	I1207 12:13:02.880193    2908 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W1207 12:13:03.315765    2908 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1207 12:13:03.316203    2908 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 12:13:03.343411    2908 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1207 12:13:03.343464    2908 docker.go:323] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 12:13:03.343581    2908 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 12:13:03.367768    2908 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 12:13:03.367845    2908 cache_images.go:92] LoadImages completed in 1.272452458s
	W1207 12:13:03.367903    2908 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20: no such file or directory
	I1207 12:13:03.367990    2908 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1207 12:13:03.387881    2908 cni.go:84] Creating CNI manager for ""
	I1207 12:13:03.387897    2908 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 12:13:03.387909    2908 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1207 12:13:03.387923    2908 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-427000 NodeName:ingress-addon-legacy-427000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1207 12:13:03.388026    2908 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-427000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1207 12:13:03.388077    2908 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-427000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-427000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1207 12:13:03.388156    2908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1207 12:13:03.392720    2908 binaries.go:44] Found k8s binaries, skipping transfer
	I1207 12:13:03.392761    2908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1207 12:13:03.396777    2908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I1207 12:13:03.402842    2908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1207 12:13:03.408243    2908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
	I1207 12:13:03.413615    2908 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I1207 12:13:03.415080    2908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1207 12:13:03.418504    2908 certs.go:56] Setting up /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000 for IP: 192.168.105.6
	I1207 12:13:03.418514    2908 certs.go:190] acquiring lock for shared ca certs: {Name:mka2d4ba9e36871ccc0bd079595857e1e300747f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:13:03.418651    2908 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.key
	I1207 12:13:03.418693    2908 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.key
	I1207 12:13:03.418719    2908 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.key
	I1207 12:13:03.418728    2908 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt with IP's: []
	I1207 12:13:03.562330    2908 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt ...
	I1207 12:13:03.562335    2908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: {Name:mka2bcca3a84aebaa48f4b63391d6ff8e086496e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:13:03.562601    2908 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.key ...
	I1207 12:13:03.562605    2908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.key: {Name:mkbf121870f3cc1916125fa8fe8de8fe4d775300 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:13:03.562726    2908 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.key.b354f644
	I1207 12:13:03.562733    2908 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I1207 12:13:03.604778    2908 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.crt.b354f644 ...
	I1207 12:13:03.604784    2908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.crt.b354f644: {Name:mk285eb70ff25dde9ac31fae0757c17b6a1efac9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:13:03.604939    2908 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.key.b354f644 ...
	I1207 12:13:03.604942    2908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.key.b354f644: {Name:mk94ad7ac64104993f49891fa4b683745ad402a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:13:03.605057    2908 certs.go:337] copying /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.crt
	I1207 12:13:03.605154    2908 certs.go:341] copying /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.key
	I1207 12:13:03.605304    2908 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/proxy-client.key
	I1207 12:13:03.605314    2908 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/proxy-client.crt with IP's: []
	I1207 12:13:03.721134    2908 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/proxy-client.crt ...
	I1207 12:13:03.721139    2908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/proxy-client.crt: {Name:mkb2c96a7d0564d3cdd91e84c2667393f1c74ca4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:13:03.721340    2908 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/proxy-client.key ...
	I1207 12:13:03.721344    2908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/proxy-client.key: {Name:mk215423965e4bc46ac53d24470eab4a8ff49b29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:13:03.721469    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1207 12:13:03.721484    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1207 12:13:03.721497    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1207 12:13:03.721507    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1207 12:13:03.721517    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1207 12:13:03.721528    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1207 12:13:03.721539    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1207 12:13:03.721549    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1207 12:13:03.721639    2908 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/1768.pem (1338 bytes)
	W1207 12:13:03.721672    2908 certs.go:433] ignoring /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/1768_empty.pem, impossibly tiny 0 bytes
	I1207 12:13:03.721679    2908 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca-key.pem (1679 bytes)
	I1207 12:13:03.721706    2908 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem (1078 bytes)
	I1207 12:13:03.721737    2908 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem (1123 bytes)
	I1207 12:13:03.721772    2908 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/certs/key.pem (1679 bytes)
	I1207 12:13:03.721834    2908 certs.go:437] found cert: /Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/17682.pem (1708 bytes)
	I1207 12:13:03.721872    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/17682.pem -> /usr/share/ca-certificates/17682.pem
	I1207 12:13:03.721883    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1207 12:13:03.721893    2908 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/1768.pem -> /usr/share/ca-certificates/1768.pem
	I1207 12:13:03.722229    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1207 12:13:03.729834    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1207 12:13:03.737279    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1207 12:13:03.744452    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1207 12:13:03.751485    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1207 12:13:03.758220    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1207 12:13:03.765425    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1207 12:13:03.772854    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1207 12:13:03.780030    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/ssl/certs/17682.pem --> /usr/share/ca-certificates/17682.pem (1708 bytes)
	I1207 12:13:03.786773    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1207 12:13:03.793733    2908 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/1768.pem --> /usr/share/ca-certificates/1768.pem (1338 bytes)
	I1207 12:13:03.801392    2908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1207 12:13:03.806635    2908 ssh_runner.go:195] Run: openssl version
	I1207 12:13:03.808550    2908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17682.pem && ln -fs /usr/share/ca-certificates/17682.pem /etc/ssl/certs/17682.pem"
	I1207 12:13:03.811562    2908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17682.pem
	I1207 12:13:03.813000    2908 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Dec  7 20:08 /usr/share/ca-certificates/17682.pem
	I1207 12:13:03.813025    2908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17682.pem
	I1207 12:13:03.814991    2908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17682.pem /etc/ssl/certs/3ec20f2e.0"
	I1207 12:13:03.817979    2908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1207 12:13:03.821393    2908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1207 12:13:03.822976    2908 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Dec  7 20:02 /usr/share/ca-certificates/minikubeCA.pem
	I1207 12:13:03.822995    2908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1207 12:13:03.824757    2908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1207 12:13:03.827743    2908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1768.pem && ln -fs /usr/share/ca-certificates/1768.pem /etc/ssl/certs/1768.pem"
	I1207 12:13:03.830584    2908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1768.pem
	I1207 12:13:03.832116    2908 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Dec  7 20:08 /usr/share/ca-certificates/1768.pem
	I1207 12:13:03.832134    2908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1768.pem
	I1207 12:13:03.833956    2908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1768.pem /etc/ssl/certs/51391683.0"
	I1207 12:13:03.837394    2908 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1207 12:13:03.838707    2908 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1207 12:13:03.838736    2908 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-427000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-427000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:13:03.838797    2908 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1207 12:13:03.844164    2908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1207 12:13:03.846964    2908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1207 12:13:03.849764    2908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1207 12:13:03.852912    2908 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1207 12:13:03.852924    2908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1207 12:13:03.877569    2908 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1207 12:13:03.877707    2908 kubeadm.go:322] [preflight] Running pre-flight checks
	I1207 12:13:03.964117    2908 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1207 12:13:03.964183    2908 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1207 12:13:03.964237    2908 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1207 12:13:04.010239    2908 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1207 12:13:04.010999    2908 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1207 12:13:04.011043    2908 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1207 12:13:04.087202    2908 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1207 12:13:04.095427    2908 out.go:204]   - Generating certificates and keys ...
	I1207 12:13:04.095469    2908 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1207 12:13:04.095527    2908 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1207 12:13:04.159970    2908 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1207 12:13:04.347950    2908 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1207 12:13:04.459862    2908 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1207 12:13:04.500094    2908 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1207 12:13:04.659369    2908 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1207 12:13:04.659480    2908 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-427000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I1207 12:13:04.845255    2908 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1207 12:13:04.845324    2908 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-427000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I1207 12:13:05.058410    2908 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1207 12:13:05.137387    2908 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1207 12:13:05.283930    2908 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1207 12:13:05.283964    2908 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1207 12:13:05.315242    2908 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1207 12:13:05.479062    2908 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1207 12:13:05.693262    2908 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1207 12:13:05.736479    2908 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1207 12:13:05.736768    2908 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1207 12:13:05.741398    2908 out.go:204]   - Booting up control plane ...
	I1207 12:13:05.741472    2908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1207 12:13:05.741529    2908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1207 12:13:05.741585    2908 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1207 12:13:05.741630    2908 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1207 12:13:05.742770    2908 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1207 12:13:17.245337    2908 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.502573 seconds
	I1207 12:13:17.245469    2908 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1207 12:13:17.259218    2908 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1207 12:13:17.809542    2908 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1207 12:13:17.809829    2908 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-427000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1207 12:13:18.315870    2908 kubeadm.go:322] [bootstrap-token] Using token: mdhqj5.zo3w8bj7bk1mgvf6
	I1207 12:13:18.320597    2908 out.go:204]   - Configuring RBAC rules ...
	I1207 12:13:18.320657    2908 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1207 12:13:18.326964    2908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1207 12:13:18.330012    2908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1207 12:13:18.331134    2908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1207 12:13:18.332032    2908 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1207 12:13:18.332925    2908 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1207 12:13:18.336105    2908 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1207 12:13:18.540076    2908 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1207 12:13:18.728606    2908 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1207 12:13:18.729278    2908 kubeadm.go:322] 
	I1207 12:13:18.729321    2908 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1207 12:13:18.729327    2908 kubeadm.go:322] 
	I1207 12:13:18.729401    2908 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1207 12:13:18.729410    2908 kubeadm.go:322] 
	I1207 12:13:18.729430    2908 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1207 12:13:18.729466    2908 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1207 12:13:18.729507    2908 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1207 12:13:18.729531    2908 kubeadm.go:322] 
	I1207 12:13:18.729568    2908 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1207 12:13:18.729650    2908 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1207 12:13:18.729714    2908 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1207 12:13:18.729724    2908 kubeadm.go:322] 
	I1207 12:13:18.729796    2908 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1207 12:13:18.729865    2908 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1207 12:13:18.729871    2908 kubeadm.go:322] 
	I1207 12:13:18.729931    2908 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token mdhqj5.zo3w8bj7bk1mgvf6 \
	I1207 12:13:18.730007    2908 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:828939e74f1d12618d8bb944cf208455a494cd79da1e765a74ad9e48dba341a3 \
	I1207 12:13:18.730026    2908 kubeadm.go:322]     --control-plane 
	I1207 12:13:18.730029    2908 kubeadm.go:322] 
	I1207 12:13:18.730085    2908 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1207 12:13:18.730093    2908 kubeadm.go:322] 
	I1207 12:13:18.730163    2908 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token mdhqj5.zo3w8bj7bk1mgvf6 \
	I1207 12:13:18.730241    2908 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:828939e74f1d12618d8bb944cf208455a494cd79da1e765a74ad9e48dba341a3 
	I1207 12:13:18.730456    2908 kubeadm.go:322] W1207 20:13:04.105591    1317 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1207 12:13:18.730585    2908 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I1207 12:13:18.730703    2908 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.7. Latest validated version: 19.03
	I1207 12:13:18.730790    2908 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1207 12:13:18.730919    2908 kubeadm.go:322] W1207 20:13:05.968158    1317 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1207 12:13:18.731019    2908 kubeadm.go:322] W1207 20:13:05.968858    1317 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1207 12:13:18.731027    2908 cni.go:84] Creating CNI manager for ""
	I1207 12:13:18.731036    2908 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 12:13:18.731052    2908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1207 12:13:18.731121    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c minikube.k8s.io/name=ingress-addon-legacy-427000 minikube.k8s.io/updated_at=2023_12_07T12_13_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:18.731133    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:18.801202    2908 ops.go:34] apiserver oom_adj: -16
	I1207 12:13:18.801233    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:18.834321    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:19.370989    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:19.870995    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:20.370990    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:20.870905    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:21.370915    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:21.870975    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:22.370859    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:22.870920    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:23.370907    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:23.870823    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:24.370926    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:24.870872    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:25.370888    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:25.869036    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:26.370842    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:26.870819    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:27.370482    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:27.870497    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:28.370717    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:28.870593    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:29.370787    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:29.870744    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:30.370610    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:30.869118    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:31.370697    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:31.870653    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:32.370636    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:32.870453    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:33.370681    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:33.870723    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:34.370734    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:34.870428    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:35.370381    2908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1207 12:13:35.410281    2908 kubeadm.go:1088] duration metric: took 16.67963875s to wait for elevateKubeSystemPrivileges.
	I1207 12:13:35.410299    2908 kubeadm.go:406] StartCluster complete in 31.572350709s
	I1207 12:13:35.410309    2908 settings.go:142] acquiring lock: {Name:mk64a7588accf4b6bd8e16cdbaa1b2c1768d52b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:13:35.410388    2908 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:13:35.410743    2908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/kubeconfig: {Name:mk1f9e67cb7d73aba54460262958078aba7f1051 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:13:35.411656    2908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1207 12:13:35.411719    2908 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1207 12:13:35.411761    2908 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-427000"
	I1207 12:13:35.411771    2908 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-427000"
	I1207 12:13:35.411781    2908 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-427000"
	I1207 12:13:35.411792    2908 host.go:66] Checking if "ingress-addon-legacy-427000" exists ...
	I1207 12:13:35.411796    2908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-427000"
	I1207 12:13:35.411966    2908 config.go:182] Loaded profile config "ingress-addon-legacy-427000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1207 12:13:35.411948    2908 kapi.go:59] client config for ingress-addon-legacy-427000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.key", CAFile:"/Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103f47060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 12:13:35.412184    2908 retry.go:31] will retry after 1.335281889s: connect: dial unix /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/monitor: connect: connection refused
	I1207 12:13:35.412310    2908 cert_rotation.go:137] Starting client certificate rotation controller
	I1207 12:13:35.413046    2908 kapi.go:59] client config for ingress-addon-legacy-427000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.key", CAFile:"/Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103f47060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 12:13:35.413148    2908 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-427000"
	I1207 12:13:35.413157    2908 host.go:66] Checking if "ingress-addon-legacy-427000" exists ...
	I1207 12:13:35.413857    2908 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1207 12:13:35.413862    2908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1207 12:13:35.413868    2908 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/id_rsa Username:docker}
	I1207 12:13:35.420249    2908 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-427000" context rescaled to 1 replicas
	I1207 12:13:35.420266    2908 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:13:35.424935    2908 out.go:177] * Verifying Kubernetes components...
	I1207 12:13:35.431961    2908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 12:13:35.462494    2908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1207 12:13:35.462560    2908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1207 12:13:35.462814    2908 kapi.go:59] client config for ingress-addon-legacy-427000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.key", CAFile:"/Users/jenkins/minikube-integration/17719-1328/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103f47060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1207 12:13:35.462983    2908 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-427000" to be "Ready" ...
	I1207 12:13:35.464544    2908 node_ready.go:49] node "ingress-addon-legacy-427000" has status "Ready":"True"
	I1207 12:13:35.464551    2908 node_ready.go:38] duration metric: took 1.561292ms waiting for node "ingress-addon-legacy-427000" to be "Ready" ...
	I1207 12:13:35.464555    2908 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 12:13:35.467733    2908 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-427000" in "kube-system" namespace to be "Ready" ...
	I1207 12:13:35.469954    2908 pod_ready.go:92] pod "etcd-ingress-addon-legacy-427000" in "kube-system" namespace has status "Ready":"True"
	I1207 12:13:35.469962    2908 pod_ready.go:81] duration metric: took 2.219ms waiting for pod "etcd-ingress-addon-legacy-427000" in "kube-system" namespace to be "Ready" ...
	I1207 12:13:35.469966    2908 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-427000" in "kube-system" namespace to be "Ready" ...
	I1207 12:13:35.475720    2908 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-427000" in "kube-system" namespace has status "Ready":"True"
	I1207 12:13:35.475729    2908 pod_ready.go:81] duration metric: took 5.7595ms waiting for pod "kube-apiserver-ingress-addon-legacy-427000" in "kube-system" namespace to be "Ready" ...
	I1207 12:13:35.475734    2908 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-427000" in "kube-system" namespace to be "Ready" ...
	I1207 12:13:35.478934    2908 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-427000" in "kube-system" namespace has status "Ready":"True"
	I1207 12:13:35.478940    2908 pod_ready.go:81] duration metric: took 3.203291ms waiting for pod "kube-controller-manager-ingress-addon-legacy-427000" in "kube-system" namespace to be "Ready" ...
	I1207 12:13:35.478944    2908 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-js7lt" in "kube-system" namespace to be "Ready" ...
	I1207 12:13:35.665118    2908 request.go:629] Waited for 184.037417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-proxy-js7lt
	I1207 12:13:35.709823    2908 start.go:929] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I1207 12:13:35.865085    2908 request.go:629] Waited for 198.4985ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-427000
	I1207 12:13:36.371140    2908 pod_ready.go:92] pod "kube-proxy-js7lt" in "kube-system" namespace has status "Ready":"True"
	I1207 12:13:36.371153    2908 pod_ready.go:81] duration metric: took 892.227334ms waiting for pod "kube-proxy-js7lt" in "kube-system" namespace to be "Ready" ...
	I1207 12:13:36.371160    2908 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-427000" in "kube-system" namespace to be "Ready" ...
	I1207 12:13:36.465032    2908 request.go:629] Waited for 93.847375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-427000
	I1207 12:13:36.665115    2908 request.go:629] Waited for 197.908708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-427000
	I1207 12:13:36.673302    2908 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-427000" in "kube-system" namespace has status "Ready":"True"
	I1207 12:13:36.673337    2908 pod_ready.go:81] duration metric: took 302.174792ms waiting for pod "kube-scheduler-ingress-addon-legacy-427000" in "kube-system" namespace to be "Ready" ...
	I1207 12:13:36.673365    2908 pod_ready.go:38] duration metric: took 1.208829167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1207 12:13:36.673408    2908 api_server.go:52] waiting for apiserver process to appear ...
	I1207 12:13:36.673665    2908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1207 12:13:36.690871    2908 api_server.go:72] duration metric: took 1.270619208s to wait for apiserver process to appear ...
	I1207 12:13:36.690892    2908 api_server.go:88] waiting for apiserver healthz status ...
	I1207 12:13:36.690909    2908 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I1207 12:13:36.699172    2908 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I1207 12:13:36.700371    2908 api_server.go:141] control plane version: v1.18.20
	I1207 12:13:36.700388    2908 api_server.go:131] duration metric: took 9.488042ms to wait for apiserver health ...
	I1207 12:13:36.700396    2908 system_pods.go:43] waiting for kube-system pods to appear ...
	I1207 12:13:36.756708    2908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 12:13:36.759926    2908 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 12:13:36.759947    2908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1207 12:13:36.759973    2908 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/ingress-addon-legacy-427000/id_rsa Username:docker}
	I1207 12:13:36.810796    2908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1207 12:13:36.864956    2908 request.go:629] Waited for 164.520375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I1207 12:13:36.867721    2908 system_pods.go:59] 6 kube-system pods found
	I1207 12:13:36.867733    2908 system_pods.go:61] "coredns-66bff467f8-mmc7n" [00fd0b3e-acdf-426e-92cb-b441a6defb50] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 12:13:36.867736    2908 system_pods.go:61] "etcd-ingress-addon-legacy-427000" [d858866c-bb82-4c36-9fdb-93045effba70] Running
	I1207 12:13:36.867740    2908 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-427000" [1ed96524-f91c-47ed-aef4-8e16afda114a] Running
	I1207 12:13:36.867745    2908 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-427000" [0ec52b07-ea1c-4c86-a4d8-8ef74b684fe1] Running
	I1207 12:13:36.867747    2908 system_pods.go:61] "kube-proxy-js7lt" [7ee4a494-0777-49d4-97e4-0fa4f4e3e93c] Running
	I1207 12:13:36.867751    2908 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-427000" [e86b8bf8-158c-4305-a133-117010e6617a] Running
	I1207 12:13:36.867754    2908 system_pods.go:74] duration metric: took 167.357084ms to wait for pod list to return data ...
	I1207 12:13:36.867760    2908 default_sa.go:34] waiting for default service account to be created ...
	I1207 12:13:36.903404    2908 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1207 12:13:36.907302    2908 addons.go:502] enable addons completed in 1.495620959s: enabled=[default-storageclass storage-provisioner]
	I1207 12:13:37.065043    2908 request.go:629] Waited for 197.235375ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I1207 12:13:37.067220    2908 default_sa.go:45] found service account: "default"
	I1207 12:13:37.067235    2908 default_sa.go:55] duration metric: took 199.475375ms for default service account to be created ...
	I1207 12:13:37.067241    2908 system_pods.go:116] waiting for k8s-apps to be running ...
	I1207 12:13:37.265017    2908 request.go:629] Waited for 197.739167ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I1207 12:13:37.268647    2908 system_pods.go:86] 7 kube-system pods found
	I1207 12:13:37.268660    2908 system_pods.go:89] "coredns-66bff467f8-mmc7n" [00fd0b3e-acdf-426e-92cb-b441a6defb50] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1207 12:13:37.268665    2908 system_pods.go:89] "etcd-ingress-addon-legacy-427000" [d858866c-bb82-4c36-9fdb-93045effba70] Running
	I1207 12:13:37.268669    2908 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-427000" [1ed96524-f91c-47ed-aef4-8e16afda114a] Running
	I1207 12:13:37.268672    2908 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-427000" [0ec52b07-ea1c-4c86-a4d8-8ef74b684fe1] Running
	I1207 12:13:37.268675    2908 system_pods.go:89] "kube-proxy-js7lt" [7ee4a494-0777-49d4-97e4-0fa4f4e3e93c] Running
	I1207 12:13:37.268678    2908 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-427000" [e86b8bf8-158c-4305-a133-117010e6617a] Running
	I1207 12:13:37.268681    2908 system_pods.go:89] "storage-provisioner" [ec2d9924-3dd6-41c9-9623-86271b61b358] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1207 12:13:37.268684    2908 system_pods.go:126] duration metric: took 201.442083ms to wait for k8s-apps to be running ...
	I1207 12:13:37.268690    2908 system_svc.go:44] waiting for kubelet service to be running ....
	I1207 12:13:37.268741    2908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1207 12:13:37.274761    2908 system_svc.go:56] duration metric: took 6.068417ms WaitForService to wait for kubelet.
	I1207 12:13:37.274774    2908 kubeadm.go:581] duration metric: took 1.854543208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1207 12:13:37.274785    2908 node_conditions.go:102] verifying NodePressure condition ...
	I1207 12:13:37.465032    2908 request.go:629] Waited for 190.210166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I1207 12:13:37.467502    2908 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I1207 12:13:37.467523    2908 node_conditions.go:123] node cpu capacity is 2
	I1207 12:13:37.467534    2908 node_conditions.go:105] duration metric: took 192.750375ms to run NodePressure ...
	I1207 12:13:37.467545    2908 start.go:228] waiting for startup goroutines ...
	I1207 12:13:37.467552    2908 start.go:233] waiting for cluster config update ...
	I1207 12:13:37.467562    2908 start.go:242] writing updated cluster config ...
	I1207 12:13:37.467999    2908 ssh_runner.go:195] Run: rm -f paused
	I1207 12:13:37.508205    2908 start.go:600] kubectl: 1.28.2, cluster: 1.18.20 (minor skew: 10)
	I1207 12:13:37.512084    2908 out.go:177] 
	W1207 12:13:37.516095    2908 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.18.20.
	I1207 12:13:37.519967    2908 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1207 12:13:37.527931    2908 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-427000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Thu 2023-12-07 20:12:51 UTC, ends at Thu 2023-12-07 20:14:47 UTC. --
	Dec 07 20:14:23 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:23.165929359Z" level=info msg="shim disconnected" id=5755716ccf09652659174cd3dc3a92e9a3f68b916e8dbbbb9410ea0a3413d4d0 namespace=moby
	Dec 07 20:14:23 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:23.165952400Z" level=warning msg="cleaning up after shim disconnected" id=5755716ccf09652659174cd3dc3a92e9a3f68b916e8dbbbb9410ea0a3413d4d0 namespace=moby
	Dec 07 20:14:23 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:23.165956566Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 07 20:14:35 ingress-addon-legacy-427000 dockerd[994]: time="2023-12-07T20:14:35.252342870Z" level=info msg="ignoring event" container=a643d54627401a2fd825fca0510fac07bf6cb166cd59e70d3378fb54852efcbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:14:35 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:35.252775865Z" level=info msg="shim disconnected" id=a643d54627401a2fd825fca0510fac07bf6cb166cd59e70d3378fb54852efcbc namespace=moby
	Dec 07 20:14:35 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:35.252825615Z" level=warning msg="cleaning up after shim disconnected" id=a643d54627401a2fd825fca0510fac07bf6cb166cd59e70d3378fb54852efcbc namespace=moby
	Dec 07 20:14:35 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:35.252846364Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 07 20:14:39 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:39.280369907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Dec 07 20:14:39 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:39.280430407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:14:39 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:39.280447240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Dec 07 20:14:39 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:39.280460198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Dec 07 20:14:39 ingress-addon-legacy-427000 dockerd[994]: time="2023-12-07T20:14:39.324848950Z" level=info msg="ignoring event" container=9cbca563e101db49f16593a625ab0083c1739cac6eb70b5b3d9fd0d1d08be0f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:14:39 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:39.324996823Z" level=info msg="shim disconnected" id=9cbca563e101db49f16593a625ab0083c1739cac6eb70b5b3d9fd0d1d08be0f2 namespace=moby
	Dec 07 20:14:39 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:39.325024990Z" level=warning msg="cleaning up after shim disconnected" id=9cbca563e101db49f16593a625ab0083c1739cac6eb70b5b3d9fd0d1d08be0f2 namespace=moby
	Dec 07 20:14:39 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:39.325028990Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 07 20:14:42 ingress-addon-legacy-427000 dockerd[994]: time="2023-12-07T20:14:42.736145968Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=0896bcdd5719157cce315f7185d7103b3246cf04c1b0fc012fa17d7ab1fc9e06
	Dec 07 20:14:42 ingress-addon-legacy-427000 dockerd[994]: time="2023-12-07T20:14:42.745148598Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=0896bcdd5719157cce315f7185d7103b3246cf04c1b0fc012fa17d7ab1fc9e06
	Dec 07 20:14:42 ingress-addon-legacy-427000 dockerd[994]: time="2023-12-07T20:14:42.829659900Z" level=info msg="ignoring event" container=0896bcdd5719157cce315f7185d7103b3246cf04c1b0fc012fa17d7ab1fc9e06 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:14:42 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:42.830270187Z" level=info msg="shim disconnected" id=0896bcdd5719157cce315f7185d7103b3246cf04c1b0fc012fa17d7ab1fc9e06 namespace=moby
	Dec 07 20:14:42 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:42.830332603Z" level=warning msg="cleaning up after shim disconnected" id=0896bcdd5719157cce315f7185d7103b3246cf04c1b0fc012fa17d7ab1fc9e06 namespace=moby
	Dec 07 20:14:42 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:42.830343311Z" level=info msg="cleaning up dead shim" namespace=moby
	Dec 07 20:14:42 ingress-addon-legacy-427000 dockerd[994]: time="2023-12-07T20:14:42.866499120Z" level=info msg="ignoring event" container=34e10f3afd52b315a57d71804cb4e02c012977ca7346ff04ba80ecfd24ef32b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Dec 07 20:14:42 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:42.866615285Z" level=info msg="shim disconnected" id=34e10f3afd52b315a57d71804cb4e02c012977ca7346ff04ba80ecfd24ef32b7 namespace=moby
	Dec 07 20:14:42 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:42.866693368Z" level=warning msg="cleaning up after shim disconnected" id=34e10f3afd52b315a57d71804cb4e02c012977ca7346ff04ba80ecfd24ef32b7 namespace=moby
	Dec 07 20:14:42 ingress-addon-legacy-427000 dockerd[1000]: time="2023-12-07T20:14:42.866707035Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE                                      COMMAND                  CREATED              STATUS                          PORTS     NAMES
	9cbca563e101   dd1b12fcb609                               "/hello-app"             8 seconds ago        Exited (1) 8 seconds ago                  k8s_hello-world-app_hello-world-app-5f5d8b66bb-tnlp4_default_3b143afb-45ab-43be-8459-c400cbdff1d7_2
	0f9e63be9098   k8s.gcr.io/pause:3.2                       "/pause"                 28 seconds ago       Up 27 seconds                             k8s_POD_hello-world-app-5f5d8b66bb-tnlp4_default_3b143afb-45ab-43be-8459-c400cbdff1d7_0
	843291f69d9e   nginx                                      "/docker-entrypoint.…"   34 seconds ago       Up 34 seconds                             k8s_nginx_nginx_default_feb944bc-6846-4c5b-93c1-a78c1c46a161_0
	7c41871e3405   k8s.gcr.io/pause:3.2                       "/pause"                 37 seconds ago       Up 37 seconds                             k8s_POD_nginx_default_feb944bc-6846-4c5b-93c1-a78c1c46a161_0
	a643d5462740   k8s.gcr.io/pause:3.2                       "/pause"                 54 seconds ago       Exited (0) 12 seconds ago                 k8s_POD_kube-ingress-dns-minikube_kube-system_071b7c01-5d77-4b34-a5c5-17952ec3044b_0
	0896bcdd5719   registry.k8s.io/ingress-nginx/controller   "/usr/bin/dumb-init …"   55 seconds ago       Exited (137) 4 seconds ago                k8s_controller_ingress-nginx-controller-7fcf777cb7-nnj9m_ingress-nginx_4ada6121-15ef-4ae4-afc0-372e03a093b3_0
	34e10f3afd52   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) 4 seconds ago                  k8s_POD_ingress-nginx-controller-7fcf777cb7-nnj9m_ingress-nginx_4ada6121-15ef-4ae4-afc0-372e03a093b3_0
	13492cd406aa   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_patch_ingress-nginx-admission-patch-2wm78_ingress-nginx_31c1c4ae-2dfa-442f-bb3e-ac11789c1061_0
	0a56cdcfeec4   jettech/kube-webhook-certgen               "/kube-webhook-certg…"   About a minute ago   Exited (0) About a minute ago             k8s_create_ingress-nginx-admission-create-rscxs_ingress-nginx_e0e73dfa-e303-4190-a737-af136b87be49_0
	388ffbb8333f   gcr.io/k8s-minikube/storage-provisioner    "/storage-provisioner"   About a minute ago   Up About a minute                         k8s_storage-provisioner_storage-provisioner_kube-system_ec2d9924-3dd6-41c9-9623-86271b61b358_0
	854c44ddd329   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-patch-2wm78_ingress-nginx_31c1c4ae-2dfa-442f-bb3e-ac11789c1061_0
	201552d5e0d1   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Exited (0) About a minute ago             k8s_POD_ingress-nginx-admission-create-rscxs_ingress-nginx_e0e73dfa-e303-4190-a737-af136b87be49_0
	011185e250a0   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_storage-provisioner_kube-system_ec2d9924-3dd6-41c9-9623-86271b61b358_0
	c1266a082632   6e17ba78cf3e                               "/coredns -conf /etc…"   About a minute ago   Up About a minute                         k8s_coredns_coredns-66bff467f8-mmc7n_kube-system_00fd0b3e-acdf-426e-92cb-b441a6defb50_0
	008a784a7fd7   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_coredns-66bff467f8-mmc7n_kube-system_00fd0b3e-acdf-426e-92cb-b441a6defb50_0
	fb99db0ee8a3   565297bc6f7d                               "/usr/local/bin/kube…"   About a minute ago   Up About a minute                         k8s_kube-proxy_kube-proxy-js7lt_kube-system_7ee4a494-0777-49d4-97e4-0fa4f4e3e93c_0
	435bd5415e16   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-proxy-js7lt_kube-system_7ee4a494-0777-49d4-97e4-0fa4f4e3e93c_0
	22bcd5cbd243   68a4fac29a86                               "kube-controller-man…"   About a minute ago   Up About a minute                         k8s_kube-controller-manager_kube-controller-manager-ingress-addon-legacy-427000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	93610687c87c   095f37015706                               "kube-scheduler --au…"   About a minute ago   Up About a minute                         k8s_kube-scheduler_kube-scheduler-ingress-addon-legacy-427000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	5ecf2573a940   ab707b0a0ea3                               "etcd --advertise-cl…"   About a minute ago   Up About a minute                         k8s_etcd_etcd-ingress-addon-legacy-427000_kube-system_0169b7ff782cd3804ca109c92c20d00c_0
	2ec933027c4d   2694cf044d66                               "kube-apiserver --ad…"   About a minute ago   Up About a minute                         k8s_kube-apiserver_kube-apiserver-ingress-addon-legacy-427000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	04d89b8e4e21   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-scheduler-ingress-addon-legacy-427000_kube-system_d12e497b0008e22acbcd5a9cf2dd48ac_0
	aa43d7d8a54c   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-controller-manager-ingress-addon-legacy-427000_kube-system_b395a1e17534e69e27827b1f8d737725_0
	348155fbe416   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_kube-apiserver-ingress-addon-legacy-427000_kube-system_36bf945afdf7c8fc8d73074b2bf4e3c3_0
	48c26f233dca   k8s.gcr.io/pause:3.2                       "/pause"                 About a minute ago   Up About a minute                         k8s_POD_etcd-ingress-addon-legacy-427000_kube-system_0169b7ff782cd3804ca109c92c20d00c_0
	time="2023-12-07T20:14:47Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	
	* 
	* ==> coredns [c1266a082632] <==
	* [INFO] 172.17.0.1:9207 - 36059 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000025791s
	[INFO] 172.17.0.1:9207 - 48039 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000050666s
	[INFO] 172.17.0.1:9207 - 27311 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000029583s
	[INFO] 172.17.0.1:9207 - 13604 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000034041s
	[INFO] 172.17.0.1:62275 - 58482 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000028125s
	[INFO] 172.17.0.1:62275 - 15572 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000088331s
	[INFO] 172.17.0.1:62275 - 61213 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015624s
	[INFO] 172.17.0.1:62275 - 31561 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00001225s
	[INFO] 172.17.0.1:62275 - 8570 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012542s
	[INFO] 172.17.0.1:62275 - 29850 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000025333s
	[INFO] 172.17.0.1:62275 - 61366 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000022166s
	[INFO] 172.17.0.1:1198 - 22269 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056749s
	[INFO] 172.17.0.1:1198 - 13968 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000018416s
	[INFO] 172.17.0.1:1198 - 42226 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000019291s
	[INFO] 172.17.0.1:1198 - 49444 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000013333s
	[INFO] 172.17.0.1:1198 - 11430 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000013333s
	[INFO] 172.17.0.1:1198 - 15870 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012791s
	[INFO] 172.17.0.1:1198 - 44911 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000014458s
	[INFO] 172.17.0.1:2692 - 51417 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042458s
	[INFO] 172.17.0.1:2692 - 25161 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000036333s
	[INFO] 172.17.0.1:2692 - 17303 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000012583s
	[INFO] 172.17.0.1:2692 - 61176 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000020333s
	[INFO] 172.17.0.1:2692 - 30833 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000021333s
	[INFO] 172.17.0.1:2692 - 37945 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000023624s
	[INFO] 172.17.0.1:2692 - 250 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000018666s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-427000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-427000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e9ef2cce417fa3e029706bd52eaf40ea89608b2c
	                    minikube.k8s.io/name=ingress-addon-legacy-427000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_12_07T12_13_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 07 Dec 2023 20:13:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-427000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 07 Dec 2023 20:14:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 07 Dec 2023 20:14:25 +0000   Thu, 07 Dec 2023 20:13:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 07 Dec 2023 20:14:25 +0000   Thu, 07 Dec 2023 20:13:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 07 Dec 2023 20:14:25 +0000   Thu, 07 Dec 2023 20:13:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 07 Dec 2023 20:14:25 +0000   Thu, 07 Dec 2023 20:13:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-427000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4002808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4002808Ki
	  pods:               110
	System Info:
	  Machine ID:                 a9ecdc36c10246b587c037cea80b54a8
	  System UUID:                a9ecdc36c10246b587c037cea80b54a8
	  Boot ID:                    9b9e9956-93f9-49ec-b573-2942003f229f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.7
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-tnlp4                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 coredns-66bff467f8-mmc7n                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     72s
	  kube-system                 etcd-ingress-addon-legacy-427000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-apiserver-ingress-addon-legacy-427000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-427000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 kube-proxy-js7lt                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-scheduler-ingress-addon-legacy-427000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         82s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From        Message
	  ----    ------                   ----  ----        -------
	  Normal  Starting                 82s   kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  82s   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  82s   kubelet     Node ingress-addon-legacy-427000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s   kubelet     Node ingress-addon-legacy-427000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s   kubelet     Node ingress-addon-legacy-427000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                82s   kubelet     Node ingress-addon-legacy-427000 status is now: NodeReady
	  Normal  Starting                 72s   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Dec 7 20:12] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.652624] EINJ: EINJ table not found.
	[  +0.543608] systemd-fstab-generator[117]: Ignoring "noauto" for root device
	[  +4.380223] systemd-fstab-generator[487]: Ignoring "noauto" for root device
	[  +0.084772] systemd-fstab-generator[498]: Ignoring "noauto" for root device
	[  +0.431841] systemd-fstab-generator[726]: Ignoring "noauto" for root device
	[  +0.168841] systemd-fstab-generator[761]: Ignoring "noauto" for root device
	[  +0.081266] systemd-fstab-generator[772]: Ignoring "noauto" for root device
	[  +0.084902] systemd-fstab-generator[785]: Ignoring "noauto" for root device
	[  +4.315460] kauditd_printk_skb: 125 callbacks suppressed
	[  +0.036262] systemd-fstab-generator[987]: Ignoring "noauto" for root device
	[Dec 7 20:13] systemd-fstab-generator[1433]: Ignoring "noauto" for root device
	[  +8.537655] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.087817] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +5.671977] kauditd_printk_skb: 7 callbacks suppressed
	[  +0.035798] systemd-fstab-generator[2439]: Ignoring "noauto" for root device
	[ +17.425134] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.620643] kauditd_printk_skb: 17 callbacks suppressed
	[  +2.235302] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Dec 7 20:14] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [5ecf2573a940] <==
	* raft2023/12/07 20:13:13 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/12/07 20:13:13 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/12/07 20:13:13 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/12/07 20:13:13 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-12-07 20:13:13.923712 W | auth: simple token is not cryptographically signed
	2023-12-07 20:13:14.055384 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-12-07 20:13:14.068153 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-12-07 20:13:14.167382 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-12-07 20:13:14.183380 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-12-07 20:13:14.183448 I | embed: listening for peers on 192.168.105.6:2380
	raft2023/12/07 20:13:14 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-12-07 20:13:14.183530 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	raft2023/12/07 20:13:14 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/12/07 20:13:14 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/12/07 20:13:14 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/12/07 20:13:14 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/12/07 20:13:14 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-12-07 20:13:14.716510 I | etcdserver: published {Name:ingress-addon-legacy-427000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-12-07 20:13:14.716865 I | etcdserver: setting up the initial cluster version to 3.4
	2023-12-07 20:13:14.716950 I | embed: ready to serve client requests
	2023-12-07 20:13:14.717060 I | embed: ready to serve client requests
	2023-12-07 20:13:14.718319 I | embed: serving client requests on 192.168.105.6:2379
	2023-12-07 20:13:14.718768 I | embed: serving client requests on 127.0.0.1:2379
	2023-12-07 20:13:14.718880 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-12-07 20:13:14.718956 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  20:14:47 up 1 min,  0 users,  load average: 0.67, 0.27, 0.10
	Linux ingress-addon-legacy-427000 5.10.57 #1 SMP PREEMPT Tue Dec 5 16:07:42 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2ec933027c4d] <==
	* E1207 20:13:16.241644       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I1207 20:13:16.305392       1 cache.go:39] Caches are synced for autoregister controller
	I1207 20:13:16.305399       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1207 20:13:16.305512       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1207 20:13:16.306654       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1207 20:13:16.320187       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1207 20:13:17.203603       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1207 20:13:17.203671       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1207 20:13:17.213204       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1207 20:13:17.219912       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1207 20:13:17.219948       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1207 20:13:17.363838       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1207 20:13:17.373669       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1207 20:13:17.514823       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I1207 20:13:17.515161       1 controller.go:609] quota admission added evaluator for: endpoints
	I1207 20:13:17.516229       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1207 20:13:18.524594       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1207 20:13:18.764017       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1207 20:13:18.949086       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1207 20:13:25.155001       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1207 20:13:35.386081       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1207 20:13:35.862619       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1207 20:13:37.941567       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1207 20:14:09.917467       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E1207 20:14:40.736754       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [22bcd5cbd243] <==
	* I1207 20:13:35.635905       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I1207 20:13:35.636825       1 shared_informer.go:230] Caches are synced for bootstrap_signer 
	I1207 20:13:35.657006       1 shared_informer.go:230] Caches are synced for endpoint 
	I1207 20:13:35.786825       1 shared_informer.go:230] Caches are synced for resource quota 
	I1207 20:13:35.834549       1 shared_informer.go:230] Caches are synced for job 
	I1207 20:13:35.835393       1 shared_informer.go:230] Caches are synced for resource quota 
	I1207 20:13:35.836918       1 shared_informer.go:230] Caches are synced for disruption 
	I1207 20:13:35.836928       1 disruption.go:339] Sending events to api server.
	I1207 20:13:35.839790       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1207 20:13:35.839820       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1207 20:13:35.846442       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1207 20:13:35.861121       1 shared_informer.go:230] Caches are synced for deployment 
	I1207 20:13:35.865205       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"ee0314ba-3fb4-4ae9-ba71-6c457588a831", APIVersion:"apps/v1", ResourceVersion:"319", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I1207 20:13:35.867923       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"3ee397d6-485d-445b-bba2-7017627be954", APIVersion:"apps/v1", ResourceVersion:"328", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-mmc7n
	I1207 20:13:35.871130       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	E1207 20:13:35.900519       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I1207 20:13:37.937221       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"7edcee45-4c5d-4170-86c9-c8f92245e4c2", APIVersion:"apps/v1", ResourceVersion:"401", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1207 20:13:37.946418       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"1e89cc4a-ee56-45fd-bc94-e31e120c4c9a", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-nnj9m
	I1207 20:13:37.948168       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b6c40f52-677d-49f4-b486-5972d799296c", APIVersion:"batch/v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-rscxs
	I1207 20:13:37.975967       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"43455017-83ed-4705-9fc0-923be8578921", APIVersion:"batch/v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-2wm78
	I1207 20:13:41.508145       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b6c40f52-677d-49f4-b486-5972d799296c", APIVersion:"batch/v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1207 20:13:42.542858       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"43455017-83ed-4705-9fc0-923be8578921", APIVersion:"batch/v1", ResourceVersion:"427", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1207 20:14:19.212396       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"2979827a-c7eb-4b41-bbbb-4314d94cfb41", APIVersion:"apps/v1", ResourceVersion:"554", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1207 20:14:19.221975       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"73fe9b37-b30c-4c3a-89b4-f7933b5c2d17", APIVersion:"apps/v1", ResourceVersion:"555", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-tnlp4
	E1207 20:14:45.467583       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-4vlr7" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [fb99db0ee8a3] <==
	* W1207 20:13:35.949562       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1207 20:13:35.954540       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I1207 20:13:35.954561       1 server_others.go:186] Using iptables Proxier.
	I1207 20:13:35.954688       1 server.go:583] Version: v1.18.20
	I1207 20:13:35.958905       1 config.go:315] Starting service config controller
	I1207 20:13:35.958975       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1207 20:13:35.959186       1 config.go:133] Starting endpoints config controller
	I1207 20:13:35.959195       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1207 20:13:36.059102       1 shared_informer.go:230] Caches are synced for service config 
	I1207 20:13:36.059403       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [93610687c87c] <==
	* I1207 20:13:16.267881       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1207 20:13:16.267998       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I1207 20:13:16.268927       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 20:13:16.268975       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1207 20:13:16.269394       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1207 20:13:16.269569       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1207 20:13:16.269689       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 20:13:16.272368       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1207 20:13:16.272391       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1207 20:13:16.272414       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1207 20:13:16.272438       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1207 20:13:16.272460       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 20:13:16.272481       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 20:13:16.272501       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1207 20:13:16.272521       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1207 20:13:16.272541       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 20:13:16.272561       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 20:13:16.272581       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1207 20:13:17.085749       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1207 20:13:17.168110       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1207 20:13:17.168596       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1207 20:13:17.168809       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1207 20:13:17.204267       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1207 20:13:17.246466       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1207 20:13:19.469212       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-12-07 20:12:51 UTC, ends at Thu 2023-12-07 20:14:47 UTC. --
	Dec 07 20:14:25 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:25.180215    2445 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5755716ccf09652659174cd3dc3a92e9a3f68b916e8dbbbb9410ea0a3413d4d0
	Dec 07 20:14:25 ingress-addon-legacy-427000 kubelet[2445]: E1207 20:14:25.180638    2445 pod_workers.go:191] Error syncing pod 3b143afb-45ab-43be-8459-c400cbdff1d7 ("hello-world-app-5f5d8b66bb-tnlp4_default(3b143afb-45ab-43be-8459-c400cbdff1d7)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-tnlp4_default(3b143afb-45ab-43be-8459-c400cbdff1d7)"
	Dec 07 20:14:28 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:28.197276    2445 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: caa15832c80e152a299c5f9cb12264b7162fdf635319e65818240fa09580bf78
	Dec 07 20:14:28 ingress-addon-legacy-427000 kubelet[2445]: E1207 20:14:28.198163    2445 pod_workers.go:191] Error syncing pod 071b7c01-5d77-4b34-a5c5-17952ec3044b ("kube-ingress-dns-minikube_kube-system(071b7c01-5d77-4b34-a5c5-17952ec3044b)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(071b7c01-5d77-4b34-a5c5-17952ec3044b)"
	Dec 07 20:14:34 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:34.533445    2445 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-zj4zd" (UniqueName: "kubernetes.io/secret/071b7c01-5d77-4b34-a5c5-17952ec3044b-minikube-ingress-dns-token-zj4zd") pod "071b7c01-5d77-4b34-a5c5-17952ec3044b" (UID: "071b7c01-5d77-4b34-a5c5-17952ec3044b")
	Dec 07 20:14:34 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:34.536434    2445 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/071b7c01-5d77-4b34-a5c5-17952ec3044b-minikube-ingress-dns-token-zj4zd" (OuterVolumeSpecName: "minikube-ingress-dns-token-zj4zd") pod "071b7c01-5d77-4b34-a5c5-17952ec3044b" (UID: "071b7c01-5d77-4b34-a5c5-17952ec3044b"). InnerVolumeSpecName "minikube-ingress-dns-token-zj4zd". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:14:34 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:34.638487    2445 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-zj4zd" (UniqueName: "kubernetes.io/secret/071b7c01-5d77-4b34-a5c5-17952ec3044b-minikube-ingress-dns-token-zj4zd") on node "ingress-addon-legacy-427000" DevicePath ""
	Dec 07 20:14:35 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:35.317277    2445 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: caa15832c80e152a299c5f9cb12264b7162fdf635319e65818240fa09580bf78
	Dec 07 20:14:39 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:39.197437    2445 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5755716ccf09652659174cd3dc3a92e9a3f68b916e8dbbbb9410ea0a3413d4d0
	Dec 07 20:14:39 ingress-addon-legacy-427000 kubelet[2445]: W1207 20:14:39.335070    2445 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod3b143afb-45ab-43be-8459-c400cbdff1d7/9cbca563e101db49f16593a625ab0083c1739cac6eb70b5b3d9fd0d1d08be0f2": none of the resources are being tracked.
	Dec 07 20:14:39 ingress-addon-legacy-427000 kubelet[2445]: W1207 20:14:39.385776    2445 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-tnlp4 through plugin: invalid network status for
	Dec 07 20:14:39 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:39.388072    2445 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 5755716ccf09652659174cd3dc3a92e9a3f68b916e8dbbbb9410ea0a3413d4d0
	Dec 07 20:14:39 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:39.388209    2445 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 9cbca563e101db49f16593a625ab0083c1739cac6eb70b5b3d9fd0d1d08be0f2
	Dec 07 20:14:39 ingress-addon-legacy-427000 kubelet[2445]: E1207 20:14:39.388325    2445 pod_workers.go:191] Error syncing pod 3b143afb-45ab-43be-8459-c400cbdff1d7 ("hello-world-app-5f5d8b66bb-tnlp4_default(3b143afb-45ab-43be-8459-c400cbdff1d7)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-tnlp4_default(3b143afb-45ab-43be-8459-c400cbdff1d7)"
	Dec 07 20:14:40 ingress-addon-legacy-427000 kubelet[2445]: W1207 20:14:40.391505    2445 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-tnlp4 through plugin: invalid network status for
	Dec 07 20:14:40 ingress-addon-legacy-427000 kubelet[2445]: E1207 20:14:40.727624    2445 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-nnj9m.179ea5ddae7d8a81", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-nnj9m", UID:"4ada6121-15ef-4ae4-afc0-372e03a093b3", APIVersion:"v1", ResourceVersion:"409", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-427000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc154a7cc2b4caa81, ext:81990293869, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc154a7cc2b4caa81, ext:81990293869, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-nnj9m.179ea5ddae7d8a81" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 07 20:14:40 ingress-addon-legacy-427000 kubelet[2445]: E1207 20:14:40.733390    2445 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-nnj9m.179ea5ddae7d8a81", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-nnj9m", UID:"4ada6121-15ef-4ae4-afc0-372e03a093b3", APIVersion:"v1", ResourceVersion:"409", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-427000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc154a7cc2b4caa81, ext:81990293869, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc154a7cc2b8f7236, ext:81994670413, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-nnj9m.179ea5ddae7d8a81" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Dec 07 20:14:43 ingress-addon-legacy-427000 kubelet[2445]: W1207 20:14:43.444492    2445 pod_container_deletor.go:77] Container "34e10f3afd52b315a57d71804cb4e02c012977ca7346ff04ba80ecfd24ef32b7" not found in pod's containers
	Dec 07 20:14:44 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:44.940149    2445 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4ada6121-15ef-4ae4-afc0-372e03a093b3-webhook-cert") pod "4ada6121-15ef-4ae4-afc0-372e03a093b3" (UID: "4ada6121-15ef-4ae4-afc0-372e03a093b3")
	Dec 07 20:14:44 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:44.940289    2445 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-px6bk" (UniqueName: "kubernetes.io/secret/4ada6121-15ef-4ae4-afc0-372e03a093b3-ingress-nginx-token-px6bk") pod "4ada6121-15ef-4ae4-afc0-372e03a093b3" (UID: "4ada6121-15ef-4ae4-afc0-372e03a093b3")
	Dec 07 20:14:44 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:44.952615    2445 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ada6121-15ef-4ae4-afc0-372e03a093b3-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "4ada6121-15ef-4ae4-afc0-372e03a093b3" (UID: "4ada6121-15ef-4ae4-afc0-372e03a093b3"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:14:44 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:44.952858    2445 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ada6121-15ef-4ae4-afc0-372e03a093b3-ingress-nginx-token-px6bk" (OuterVolumeSpecName: "ingress-nginx-token-px6bk") pod "4ada6121-15ef-4ae4-afc0-372e03a093b3" (UID: "4ada6121-15ef-4ae4-afc0-372e03a093b3"). InnerVolumeSpecName "ingress-nginx-token-px6bk". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Dec 07 20:14:45 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:45.042659    2445 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/4ada6121-15ef-4ae4-afc0-372e03a093b3-webhook-cert") on node "ingress-addon-legacy-427000" DevicePath ""
	Dec 07 20:14:45 ingress-addon-legacy-427000 kubelet[2445]: I1207 20:14:45.042790    2445 reconciler.go:319] Volume detached for volume "ingress-nginx-token-px6bk" (UniqueName: "kubernetes.io/secret/4ada6121-15ef-4ae4-afc0-372e03a093b3-ingress-nginx-token-px6bk") on node "ingress-addon-legacy-427000" DevicePath ""
	Dec 07 20:14:45 ingress-addon-legacy-427000 kubelet[2445]: W1207 20:14:45.230565    2445 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/4ada6121-15ef-4ae4-afc0-372e03a093b3/volumes" does not exist
	
	* 
	* ==> storage-provisioner [388ffbb8333f] <==
	* I1207 20:13:39.300300       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1207 20:13:39.304561       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1207 20:13:39.304576       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1207 20:13:39.307575       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1207 20:13:39.307750       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-427000_2d9a1bf8-b149-4798-9bf1-c3a0bf4b3e73!
	I1207 20:13:39.309769       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f4a7d52-5803-4828-87b5-fcadb2e40cbe", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-427000_2d9a1bf8-b149-4798-9bf1-c3a0bf4b3e73 became leader
	I1207 20:13:39.408304       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-427000_2d9a1bf8-b149-4798-9bf1-c3a0bf4b3e73!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-427000 -n ingress-addon-legacy-427000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-427000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (54.85s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.22s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-644000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-644000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.144806958s)

                                                
                                                
-- stdout --
	* [mount-start-1-644000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-644000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-644000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-644000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-644000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-644000 -n mount-start-1-644000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-644000 -n mount-start-1-644000: exit status 7 (70.464875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-644000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.22s)
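Every qemu2 failure above reduces to the same root cause: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening when libmachine launched `socket_vmnet_client`. A minimal pre-flight sketch for the CI host, assuming the socket path reported in the logs (`/var/run/socket_vmnet` is taken from the output above; the check itself is illustrative, not part of the test suite):

```shell
#!/bin/sh
# Diagnostic sketch: confirm the socket_vmnet UNIX socket exists before
# retrying `minikube start --driver=qemu2`. A missing or non-socket path
# explains the "Connection refused" seen in the failures above.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
  echo "socket present: $SOCK"
else
  echo "socket missing: $SOCK (is the socket_vmnet daemon running?)"
fi
```

If the socket is missing, restarting the socket_vmnet daemon on the host (and only then re-running the failed test) is the likely fix; the repeated per-test retries inside minikube cannot succeed while the socket is absent.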

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-554000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:86: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-554000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.748234166s)

                                                
                                                
-- stdout --
	* [multinode-554000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-554000 in cluster multinode-554000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 12:17:00.361707    3262 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:17:00.361871    3262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:17:00.361874    3262 out.go:309] Setting ErrFile to fd 2...
	I1207 12:17:00.361877    3262 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:17:00.362006    3262 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:17:00.363083    3262 out.go:303] Setting JSON to false
	I1207 12:17:00.378993    3262 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2791,"bootTime":1701977429,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:17:00.379076    3262 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:17:00.387614    3262 out.go:177] * [multinode-554000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:17:00.391539    3262 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:17:00.391591    3262 notify.go:220] Checking for updates...
	I1207 12:17:00.396800    3262 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:17:00.399584    3262 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:17:00.402581    3262 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:17:00.405614    3262 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:17:00.408574    3262 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:17:00.411790    3262 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:17:00.415602    3262 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:17:00.422544    3262 start.go:298] selected driver: qemu2
	I1207 12:17:00.422551    3262 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:17:00.422556    3262 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:17:00.424728    3262 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:17:00.427628    3262 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:17:00.430645    3262 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:17:00.430704    3262 cni.go:84] Creating CNI manager for ""
	I1207 12:17:00.430710    3262 cni.go:136] 0 nodes found, recommending kindnet
	I1207 12:17:00.430718    3262 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 12:17:00.430724    3262 start_flags.go:323] config:
	{Name:multinode-554000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:doc
ker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAge
ntPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:17:00.435260    3262 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:17:00.442379    3262 out.go:177] * Starting control plane node multinode-554000 in cluster multinode-554000
	I1207 12:17:00.446548    3262 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:17:00.446572    3262 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:17:00.446581    3262 cache.go:56] Caching tarball of preloaded images
	I1207 12:17:00.446642    3262 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:17:00.446647    3262 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:17:00.446831    3262 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/multinode-554000/config.json ...
	I1207 12:17:00.446842    3262 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/multinode-554000/config.json: {Name:mkabee001781d1aba15753f6dfa2f578bc11a831 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:17:00.447059    3262 start.go:365] acquiring machines lock for multinode-554000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:17:00.447090    3262 start.go:369] acquired machines lock for "multinode-554000" in 25.75µs
	I1207 12:17:00.447102    3262 start.go:93] Provisioning new machine with config: &{Name:multinode-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:17:00.447133    3262 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:17:00.455595    3262 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:17:00.472222    3262 start.go:159] libmachine.API.Create for "multinode-554000" (driver="qemu2")
	I1207 12:17:00.472250    3262 client.go:168] LocalClient.Create starting
	I1207 12:17:00.472310    3262 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:17:00.472339    3262 main.go:141] libmachine: Decoding PEM data...
	I1207 12:17:00.472349    3262 main.go:141] libmachine: Parsing certificate...
	I1207 12:17:00.472388    3262 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:17:00.472411    3262 main.go:141] libmachine: Decoding PEM data...
	I1207 12:17:00.472417    3262 main.go:141] libmachine: Parsing certificate...
	I1207 12:17:00.472796    3262 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:17:00.593433    3262 main.go:141] libmachine: Creating SSH key...
	I1207 12:17:00.703766    3262 main.go:141] libmachine: Creating Disk image...
	I1207 12:17:00.703772    3262 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:17:00.703933    3262 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2
	I1207 12:17:00.716061    3262 main.go:141] libmachine: STDOUT: 
	I1207 12:17:00.716089    3262 main.go:141] libmachine: STDERR: 
	I1207 12:17:00.716154    3262 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2 +20000M
	I1207 12:17:00.726895    3262 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:17:00.726916    3262 main.go:141] libmachine: STDERR: 
	I1207 12:17:00.726940    3262 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2
	I1207 12:17:00.726944    3262 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:17:00.726983    3262 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:3f:50:df:b5:b9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2
	I1207 12:17:00.728748    3262 main.go:141] libmachine: STDOUT: 
	I1207 12:17:00.728766    3262 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:17:00.728785    3262 client.go:171] LocalClient.Create took 256.536083ms
	I1207 12:17:02.730945    3262 start.go:128] duration metric: createHost completed in 2.283836042s
	I1207 12:17:02.731016    3262 start.go:83] releasing machines lock for "multinode-554000", held for 2.283974125s
	W1207 12:17:02.731126    3262 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:17:02.741331    3262 out.go:177] * Deleting "multinode-554000" in qemu2 ...
	W1207 12:17:02.763613    3262 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:17:02.763642    3262 start.go:709] Will try again in 5 seconds ...
	I1207 12:17:07.765798    3262 start.go:365] acquiring machines lock for multinode-554000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:17:07.766250    3262 start.go:369] acquired machines lock for "multinode-554000" in 297.333µs
	I1207 12:17:07.766369    3262 start.go:93] Provisioning new machine with config: &{Name:multinode-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:17:07.766710    3262 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:17:07.777301    3262 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:17:07.826221    3262 start.go:159] libmachine.API.Create for "multinode-554000" (driver="qemu2")
	I1207 12:17:07.826280    3262 client.go:168] LocalClient.Create starting
	I1207 12:17:07.826413    3262 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:17:07.826479    3262 main.go:141] libmachine: Decoding PEM data...
	I1207 12:17:07.826494    3262 main.go:141] libmachine: Parsing certificate...
	I1207 12:17:07.826550    3262 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:17:07.826593    3262 main.go:141] libmachine: Decoding PEM data...
	I1207 12:17:07.826606    3262 main.go:141] libmachine: Parsing certificate...
	I1207 12:17:07.827155    3262 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:17:07.959510    3262 main.go:141] libmachine: Creating SSH key...
	I1207 12:17:08.010478    3262 main.go:141] libmachine: Creating Disk image...
	I1207 12:17:08.010484    3262 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:17:08.010657    3262 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2
	I1207 12:17:08.022579    3262 main.go:141] libmachine: STDOUT: 
	I1207 12:17:08.022597    3262 main.go:141] libmachine: STDERR: 
	I1207 12:17:08.022645    3262 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2 +20000M
	I1207 12:17:08.032881    3262 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:17:08.032897    3262 main.go:141] libmachine: STDERR: 
	I1207 12:17:08.032916    3262 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2
	I1207 12:17:08.032922    3262 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:17:08.032961    3262 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:58:8c:19:bd:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2
	I1207 12:17:08.034608    3262 main.go:141] libmachine: STDOUT: 
	I1207 12:17:08.034625    3262 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:17:08.034642    3262 client.go:171] LocalClient.Create took 208.361042ms
	I1207 12:17:10.036773    3262 start.go:128] duration metric: createHost completed in 2.270090875s
	I1207 12:17:10.036821    3262 start.go:83] releasing machines lock for "multinode-554000", held for 2.27060475s
	W1207 12:17:10.037237    3262 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:17:10.046839    3262 out.go:177] 
	W1207 12:17:10.051726    3262 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:17:10.051763    3262 out.go:239] * 
	* 
	W1207 12:17:10.062866    3262 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:17:10.065822    3262 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:88: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-554000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (56.10425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.81s)

TestMultiNode/serial/DeployApp2Nodes (88.75s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:509: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (119.449584ms)

** stderr ** 
	error: cluster "multinode-554000" does not exist

** /stderr **
multinode_test.go:511: failed to create busybox deployment to multinode cluster
multinode_test.go:514: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- rollout status deployment/busybox: exit status 1 (60.796458ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:516: failed to deploy busybox to multinode cluster
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.002ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.028541ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.225541ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1207 12:17:14.785432    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.33725ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.852625ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.099583ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.449916ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.137292ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.404834ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
E1207 12:18:36.724026    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.618417ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:524: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:540: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:544: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:544: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.688208ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:546: failed get Pod names
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- exec  -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- exec  -- nslookup kubernetes.io: exit status 1 (56.987708ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:554: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- exec  -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.909917ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:564: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:570: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.286667ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:572: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (31.388542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (88.75s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:580: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-554000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.172583ms)

** stderr ** 
	error: no server found for cluster "multinode-554000"

** /stderr **
multinode_test.go:582: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (31.902667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-554000 -v 3 --alsologtostderr
multinode_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-554000 -v 3 --alsologtostderr: exit status 89 (45.237291ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-554000"

-- /stdout --
** stderr ** 
	I1207 12:18:39.022406    3392 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:18:39.022641    3392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:39.022645    3392 out.go:309] Setting ErrFile to fd 2...
	I1207 12:18:39.022647    3392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:39.022772    3392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:18:39.023003    3392 mustload.go:65] Loading cluster: multinode-554000
	I1207 12:18:39.023195    3392 config.go:182] Loaded profile config "multinode-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:18:39.028453    3392 out.go:177] * The control plane node must be running for this command
	I1207 12:18:39.033425    3392 out.go:177]   To start a cluster, run: "minikube start -p multinode-554000"

** /stderr **
multinode_test.go:113: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-554000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (31.767208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-554000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:211: (dbg) Non-zero exit: kubectl --context multinode-554000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (29.90575ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-554000

** /stderr **
multinode_test.go:213: failed to 'kubectl get nodes' with args "kubectl --context multinode-554000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:220: failed to decode json from label list: args "kubectl --context multinode-554000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (31.922208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:156: expected profile "multinode-554000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-554000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-554000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-554000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"AutoPauseInterval\":60000000000,\"GPUs\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (31.566375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-554000 status --output json --alsologtostderr
multinode_test.go:174: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-554000 status --output json --alsologtostderr: exit status 7 (31.732833ms)

-- stdout --
	{"Name":"multinode-554000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1207 12:18:39.266634    3406 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:18:39.266830    3406 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:39.266833    3406 out.go:309] Setting ErrFile to fd 2...
	I1207 12:18:39.266836    3406 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:39.266977    3406 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:18:39.267101    3406 out.go:303] Setting JSON to true
	I1207 12:18:39.267118    3406 mustload.go:65] Loading cluster: multinode-554000
	I1207 12:18:39.267170    3406 notify.go:220] Checking for updates...
	I1207 12:18:39.267330    3406 config.go:182] Loaded profile config "multinode-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:18:39.267335    3406 status.go:255] checking status of multinode-554000 ...
	I1207 12:18:39.267538    3406 status.go:330] multinode-554000 host status = "Stopped" (err=<nil>)
	I1207 12:18:39.267542    3406 status.go:343] host is not running, skipping remaining checks
	I1207 12:18:39.267544    3406 status.go:257] multinode-554000 status: &{Name:multinode-554000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:181: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-554000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (30.413542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-554000 node stop m03
multinode_test.go:238: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-554000 node stop m03: exit status 85 (49.082292ms)

-- stdout --
	
	

                                                
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:240: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-554000 node stop m03": exit status 85
multinode_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-554000 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-554000 status: exit status 7 (31.914541ms)

-- stdout --
	multinode-554000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-554000 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-554000 status --alsologtostderr: exit status 7 (31.820208ms)

-- stdout --
	multinode-554000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1207 12:18:39.410719    3414 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:18:39.410909    3414 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:39.410912    3414 out.go:309] Setting ErrFile to fd 2...
	I1207 12:18:39.410915    3414 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:39.411048    3414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:18:39.411168    3414 out.go:303] Setting JSON to false
	I1207 12:18:39.411180    3414 mustload.go:65] Loading cluster: multinode-554000
	I1207 12:18:39.411244    3414 notify.go:220] Checking for updates...
	I1207 12:18:39.411391    3414 config.go:182] Loaded profile config "multinode-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:18:39.411396    3414 status.go:255] checking status of multinode-554000 ...
	I1207 12:18:39.411616    3414 status.go:330] multinode-554000 host status = "Stopped" (err=<nil>)
	I1207 12:18:39.411621    3414 status.go:343] host is not running, skipping remaining checks
	I1207 12:18:39.411623    3414 status.go:257] multinode-554000 status: &{Name:multinode-554000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:257: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-554000 status --alsologtostderr": multinode-554000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (32.131917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

TestMultiNode/serial/StartAfterStop (0.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-554000 node start m03 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-554000 node start m03 --alsologtostderr: exit status 85 (45.530208ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1207 12:18:39.474746    3418 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:18:39.474960    3418 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:39.474963    3418 out.go:309] Setting ErrFile to fd 2...
	I1207 12:18:39.474965    3418 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:39.475083    3418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:18:39.475322    3418 mustload.go:65] Loading cluster: multinode-554000
	I1207 12:18:39.475500    3418 config.go:182] Loaded profile config "multinode-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:18:39.479634    3418 out.go:177] 
	W1207 12:18:39.482733    3418 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1207 12:18:39.482737    3418 out.go:239] * 
	* 
	W1207 12:18:39.484173    3418 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:18:39.485522    3418 out.go:177] 

** /stderr **
multinode_test.go:284: I1207 12:18:39.474746    3418 out.go:296] Setting OutFile to fd 1 ...
I1207 12:18:39.474960    3418 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:18:39.474963    3418 out.go:309] Setting ErrFile to fd 2...
I1207 12:18:39.474965    3418 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:18:39.475083    3418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
I1207 12:18:39.475322    3418 mustload.go:65] Loading cluster: multinode-554000
I1207 12:18:39.475500    3418 config.go:182] Loaded profile config "multinode-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 12:18:39.479634    3418 out.go:177] 
W1207 12:18:39.482733    3418 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1207 12:18:39.482737    3418 out.go:239] * 
* 
W1207 12:18:39.484173    3418 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1207 12:18:39.485522    3418 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-554000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:289: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-554000 status
multinode_test.go:289: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-554000 status: exit status 7 (30.958167ms)

-- stdout --
	multinode-554000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:291: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-554000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (31.839791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.11s)

TestMultiNode/serial/RestartKeepsNodes (5.38s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-554000
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-554000
multinode_test.go:323: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-554000 --wait=true -v=8 --alsologtostderr
multinode_test.go:323: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-554000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.188940166s)

-- stdout --
	* [multinode-554000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-554000 in cluster multinode-554000
	* Restarting existing qemu2 VM for "multinode-554000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-554000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:18:39.676099    3428 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:18:39.676239    3428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:39.676243    3428 out.go:309] Setting ErrFile to fd 2...
	I1207 12:18:39.676245    3428 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:39.676374    3428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:18:39.677348    3428 out.go:303] Setting JSON to false
	I1207 12:18:39.693160    3428 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2890,"bootTime":1701977429,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:18:39.693241    3428 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:18:39.697667    3428 out.go:177] * [multinode-554000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:18:39.707831    3428 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:18:39.712465    3428 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:18:39.707858    3428 notify.go:220] Checking for updates...
	I1207 12:18:39.718712    3428 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:18:39.721651    3428 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:18:39.724617    3428 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:18:39.727650    3428 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:18:39.730934    3428 config.go:182] Loaded profile config "multinode-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:18:39.730978    3428 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:18:39.735605    3428 out.go:177] * Using the qemu2 driver based on existing profile
	I1207 12:18:39.742694    3428 start.go:298] selected driver: qemu2
	I1207 12:18:39.742702    3428 start.go:902] validating driver "qemu2" against &{Name:multinode-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:multinode-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:18:39.742774    3428 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:18:39.745080    3428 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:18:39.745127    3428 cni.go:84] Creating CNI manager for ""
	I1207 12:18:39.745131    3428 cni.go:136] 1 nodes found, recommending kindnet
	I1207 12:18:39.745147    3428 start_flags.go:323] config:
	{Name:multinode-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-554000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:18:39.749531    3428 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:18:39.756599    3428 out.go:177] * Starting control plane node multinode-554000 in cluster multinode-554000
	I1207 12:18:39.759507    3428 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:18:39.759523    3428 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:18:39.759533    3428 cache.go:56] Caching tarball of preloaded images
	I1207 12:18:39.759595    3428 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:18:39.759604    3428 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:18:39.759680    3428 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/multinode-554000/config.json ...
	I1207 12:18:39.760034    3428 start.go:365] acquiring machines lock for multinode-554000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:18:39.760067    3428 start.go:369] acquired machines lock for "multinode-554000" in 26.334µs
	I1207 12:18:39.760075    3428 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:18:39.760080    3428 fix.go:54] fixHost starting: 
	I1207 12:18:39.760194    3428 fix.go:102] recreateIfNeeded on multinode-554000: state=Stopped err=<nil>
	W1207 12:18:39.760202    3428 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:18:39.768491    3428 out.go:177] * Restarting existing qemu2 VM for "multinode-554000" ...
	I1207 12:18:39.772550    3428 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:58:8c:19:bd:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2
	I1207 12:18:39.774663    3428 main.go:141] libmachine: STDOUT: 
	I1207 12:18:39.774688    3428 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:18:39.774719    3428 fix.go:56] fixHost completed within 14.636959ms
	I1207 12:18:39.774723    3428 start.go:83] releasing machines lock for "multinode-554000", held for 14.6525ms
	W1207 12:18:39.774730    3428 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:18:39.774769    3428 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:18:39.774774    3428 start.go:709] Will try again in 5 seconds ...
	I1207 12:18:44.776857    3428 start.go:365] acquiring machines lock for multinode-554000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:18:44.777214    3428 start.go:369] acquired machines lock for "multinode-554000" in 268.958µs
	I1207 12:18:44.777350    3428 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:18:44.777369    3428 fix.go:54] fixHost starting: 
	I1207 12:18:44.778036    3428 fix.go:102] recreateIfNeeded on multinode-554000: state=Stopped err=<nil>
	W1207 12:18:44.778062    3428 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:18:44.785429    3428 out.go:177] * Restarting existing qemu2 VM for "multinode-554000" ...
	I1207 12:18:44.789674    3428 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:58:8c:19:bd:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2
	I1207 12:18:44.798900    3428 main.go:141] libmachine: STDOUT: 
	I1207 12:18:44.798979    3428 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:18:44.799051    3428 fix.go:56] fixHost completed within 21.681916ms
	I1207 12:18:44.799069    3428 start.go:83] releasing machines lock for "multinode-554000", held for 21.834041ms
	W1207 12:18:44.799287    3428 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-554000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-554000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:18:44.806471    3428 out.go:177] 
	W1207 12:18:44.810627    3428 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:18:44.810668    3428 out.go:239] * 
	* 
	W1207 12:18:44.813817    3428 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:18:44.821417    3428 out.go:177] 

** /stderr **
multinode_test.go:325: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-554000" : exit status 80
multinode_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-554000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (33.451916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.38s)

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-554000 node delete m03
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-554000 node delete m03: exit status 89 (42.863084ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-554000"

-- /stdout --
multinode_test.go:424: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-554000 node delete m03": exit status 89
multinode_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-554000 status --alsologtostderr
multinode_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-554000 status --alsologtostderr: exit status 7 (31.990833ms)

-- stdout --
	multinode-554000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1207 12:18:45.011644    3442 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:18:45.011847    3442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:45.011850    3442 out.go:309] Setting ErrFile to fd 2...
	I1207 12:18:45.011853    3442 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:45.011964    3442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:18:45.012077    3442 out.go:303] Setting JSON to false
	I1207 12:18:45.012090    3442 mustload.go:65] Loading cluster: multinode-554000
	I1207 12:18:45.012151    3442 notify.go:220] Checking for updates...
	I1207 12:18:45.012274    3442 config.go:182] Loaded profile config "multinode-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:18:45.012278    3442 status.go:255] checking status of multinode-554000 ...
	I1207 12:18:45.012478    3442 status.go:330] multinode-554000 host status = "Stopped" (err=<nil>)
	I1207 12:18:45.012481    3442 status.go:343] host is not running, skipping remaining checks
	I1207 12:18:45.012483    3442 status.go:257] multinode-554000 status: &{Name:multinode-554000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:430: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-554000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (32.037084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (0.16s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-554000 stop
multinode_test.go:348: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-554000 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-554000 status: exit status 7 (32.340083ms)

-- stdout --
	multinode-554000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-554000 status --alsologtostderr
multinode_test.go:355: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-554000 status --alsologtostderr: exit status 7 (31.720875ms)

-- stdout --
	multinode-554000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1207 12:18:45.170760    3450 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:18:45.170929    3450 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:45.170932    3450 out.go:309] Setting ErrFile to fd 2...
	I1207 12:18:45.170935    3450 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:45.171062    3450 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:18:45.171179    3450 out.go:303] Setting JSON to false
	I1207 12:18:45.171191    3450 mustload.go:65] Loading cluster: multinode-554000
	I1207 12:18:45.171249    3450 notify.go:220] Checking for updates...
	I1207 12:18:45.171378    3450 config.go:182] Loaded profile config "multinode-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:18:45.171382    3450 status.go:255] checking status of multinode-554000 ...
	I1207 12:18:45.171607    3450 status.go:330] multinode-554000 host status = "Stopped" (err=<nil>)
	I1207 12:18:45.171610    3450 status.go:343] host is not running, skipping remaining checks
	I1207 12:18:45.171612    3450 status.go:257] multinode-554000 status: &{Name:multinode-554000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:361: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-554000 status --alsologtostderr": multinode-554000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:365: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-554000 status --alsologtostderr": multinode-554000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (31.853958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.16s)

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-554000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:382: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-554000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.183608833s)

-- stdout --
	* [multinode-554000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-554000 in cluster multinode-554000
	* Restarting existing qemu2 VM for "multinode-554000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-554000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:18:45.234369    3454 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:18:45.234523    3454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:45.234525    3454 out.go:309] Setting ErrFile to fd 2...
	I1207 12:18:45.234528    3454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:18:45.234656    3454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:18:45.235632    3454 out.go:303] Setting JSON to false
	I1207 12:18:45.251513    3454 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2896,"bootTime":1701977429,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:18:45.251609    3454 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:18:45.256392    3454 out.go:177] * [multinode-554000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:18:45.262429    3454 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:18:45.266406    3454 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:18:45.262471    3454 notify.go:220] Checking for updates...
	I1207 12:18:45.272301    3454 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:18:45.275351    3454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:18:45.278375    3454 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:18:45.281301    3454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:18:45.284665    3454 config.go:182] Loaded profile config "multinode-554000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:18:45.284939    3454 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:18:45.289348    3454 out.go:177] * Using the qemu2 driver based on existing profile
	I1207 12:18:45.296343    3454 start.go:298] selected driver: qemu2
	I1207 12:18:45.296351    3454 start.go:902] validating driver "qemu2" against &{Name:multinode-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.28.4 ClusterName:multinode-554000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:18:45.296410    3454 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:18:45.298756    3454 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:18:45.298816    3454 cni.go:84] Creating CNI manager for ""
	I1207 12:18:45.298820    3454 cni.go:136] 1 nodes found, recommending kindnet
	I1207 12:18:45.298826    3454 start_flags.go:323] config:
	{Name:multinode-554000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-554000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:18:45.303003    3454 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:18:45.308270    3454 out.go:177] * Starting control plane node multinode-554000 in cluster multinode-554000
	I1207 12:18:45.312333    3454 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:18:45.312347    3454 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:18:45.312358    3454 cache.go:56] Caching tarball of preloaded images
	I1207 12:18:45.312399    3454 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:18:45.312404    3454 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:18:45.312478    3454 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/multinode-554000/config.json ...
	I1207 12:18:45.312887    3454 start.go:365] acquiring machines lock for multinode-554000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:18:45.312914    3454 start.go:369] acquired machines lock for "multinode-554000" in 19.458µs
	I1207 12:18:45.312924    3454 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:18:45.312929    3454 fix.go:54] fixHost starting: 
	I1207 12:18:45.313050    3454 fix.go:102] recreateIfNeeded on multinode-554000: state=Stopped err=<nil>
	W1207 12:18:45.313058    3454 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:18:45.321325    3454 out.go:177] * Restarting existing qemu2 VM for "multinode-554000" ...
	I1207 12:18:45.325252    3454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:58:8c:19:bd:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2
	I1207 12:18:45.327214    3454 main.go:141] libmachine: STDOUT: 
	I1207 12:18:45.327235    3454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:18:45.327260    3454 fix.go:56] fixHost completed within 14.32925ms
	I1207 12:18:45.327263    3454 start.go:83] releasing machines lock for "multinode-554000", held for 14.344ms
	W1207 12:18:45.327271    3454 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:18:45.327300    3454 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:18:45.327304    3454 start.go:709] Will try again in 5 seconds ...
	I1207 12:18:50.329448    3454 start.go:365] acquiring machines lock for multinode-554000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:18:50.329772    3454 start.go:369] acquired machines lock for "multinode-554000" in 243.75µs
	I1207 12:18:50.329881    3454 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:18:50.329900    3454 fix.go:54] fixHost starting: 
	I1207 12:18:50.330574    3454 fix.go:102] recreateIfNeeded on multinode-554000: state=Stopped err=<nil>
	W1207 12:18:50.330605    3454 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:18:50.338969    3454 out.go:177] * Restarting existing qemu2 VM for "multinode-554000" ...
	I1207 12:18:50.343256    3454 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:58:8c:19:bd:49 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/multinode-554000/disk.qcow2
	I1207 12:18:50.352578    3454 main.go:141] libmachine: STDOUT: 
	I1207 12:18:50.352647    3454 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:18:50.352716    3454 fix.go:56] fixHost completed within 22.818916ms
	I1207 12:18:50.352731    3454 start.go:83] releasing machines lock for "multinode-554000", held for 22.939667ms
	W1207 12:18:50.352921    3454 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-554000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-554000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:18:50.359074    3454 out.go:177] 
	W1207 12:18:50.363138    3454 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:18:50.363230    3454 out.go:239] * 
	* 
	W1207 12:18:50.366078    3454 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:18:50.374008    3454 out.go:177] 

** /stderr **
multinode_test.go:384: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-554000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (68.934958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)

TestMultiNode/serial/ValidateNameConflict (19.75s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-554000
multinode_test.go:480: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-554000-m01 --driver=qemu2 
E1207 12:18:53.155904    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
E1207 12:18:53.161874    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
E1207 12:18:53.174042    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
E1207 12:18:53.196195    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
E1207 12:18:53.238149    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
E1207 12:18:53.320337    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
E1207 12:18:53.482527    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
E1207 12:18:53.804733    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
E1207 12:18:54.446592    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
E1207 12:18:55.728924    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
E1207 12:18:58.291108    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-554000-m01 --driver=qemu2 : exit status 80 (9.746073958s)

-- stdout --
	* [multinode-554000-m01] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-554000-m01 in cluster multinode-554000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-554000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-554000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-554000-m02 --driver=qemu2 
E1207 12:19:03.413199    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
E1207 12:19:09.398284    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
multinode_test.go:488: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-554000-m02 --driver=qemu2 : exit status 80 (9.750535708s)

-- stdout --
	* [multinode-554000-m02] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-554000-m02 in cluster multinode-554000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-554000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-554000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:490: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-554000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-554000
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-554000: exit status 89 (80.182917ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-554000"

-- /stdout --
multinode_test.go:500: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-554000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-554000 -n multinode-554000: exit status 7 (32.546833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-554000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.75s)

TestPreload (10.04s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-790000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
E1207 12:19:13.653801    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-790000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.867309333s)

-- stdout --
	* [test-preload-790000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-790000 in cluster test-preload-790000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-790000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:19:10.372246    3526 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:19:10.372405    3526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:19:10.372408    3526 out.go:309] Setting ErrFile to fd 2...
	I1207 12:19:10.372411    3526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:19:10.372555    3526 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:19:10.373547    3526 out.go:303] Setting JSON to false
	I1207 12:19:10.389416    3526 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2921,"bootTime":1701977429,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:19:10.389502    3526 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:19:10.395634    3526 out.go:177] * [test-preload-790000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:19:10.402594    3526 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:19:10.407566    3526 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:19:10.402645    3526 notify.go:220] Checking for updates...
	I1207 12:19:10.410571    3526 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:19:10.413626    3526 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:19:10.416604    3526 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:19:10.419578    3526 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:19:10.422992    3526 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:19:10.423037    3526 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:19:10.427602    3526 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:19:10.434553    3526 start.go:298] selected driver: qemu2
	I1207 12:19:10.434559    3526 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:19:10.434565    3526 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:19:10.436800    3526 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:19:10.440581    3526 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:19:10.442154    3526 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:19:10.442194    3526 cni.go:84] Creating CNI manager for ""
	I1207 12:19:10.442201    3526 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:19:10.442205    3526 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:19:10.442211    3526 start_flags.go:323] config:
	{Name:test-preload-790000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-790000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock:
SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:19:10.446403    3526 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:10.453632    3526 out.go:177] * Starting control plane node test-preload-790000 in cluster test-preload-790000
	I1207 12:19:10.457578    3526 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1207 12:19:10.457682    3526 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/test-preload-790000/config.json ...
	I1207 12:19:10.457698    3526 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/test-preload-790000/config.json: {Name:mk949d75d608f20383f0b3b2d85b99ee0f406ddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:19:10.457696    3526 cache.go:107] acquiring lock: {Name:mk3f96b08734e915c8375cc942f6a715aea6e4a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:10.457707    3526 cache.go:107] acquiring lock: {Name:mkddf3dce2c990633eec184898b526fb432bbf7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:10.457707    3526 cache.go:107] acquiring lock: {Name:mk98c5e5d2e7f55fd366a5f8696822cd3c6024d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:10.457922    3526 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1207 12:19:10.457924    3526 cache.go:107] acquiring lock: {Name:mkc6adbc28e20273c991e467d009b7c114432b9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:10.457940    3526 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 12:19:10.457909    3526 cache.go:107] acquiring lock: {Name:mk8686b42b1e688e7bfee63ce1b18252b396fbc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:10.457924    3526 cache.go:107] acquiring lock: {Name:mk2a9bba12111bcde314a267feb3f15f050e4f02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:10.457923    3526 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1207 12:19:10.457963    3526 cache.go:107] acquiring lock: {Name:mk72f3a1eb1831920d7dc650c436d13a688f3565 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:10.458086    3526 cache.go:107] acquiring lock: {Name:mk983d8df2b154637a76d29f3629add08f5d0bcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:19:10.458113    3526 start.go:365] acquiring machines lock for test-preload-790000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:19:10.458168    3526 start.go:369] acquired machines lock for "test-preload-790000" in 47.833µs
	I1207 12:19:10.458180    3526 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I1207 12:19:10.458180    3526 start.go:93] Provisioning new machine with config: &{Name:test-preload-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-790000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:19:10.458213    3526 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:19:10.465563    3526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:19:10.458258    3526 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1207 12:19:10.458261    3526 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1207 12:19:10.458281    3526 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1207 12:19:10.458308    3526 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1207 12:19:10.468871    3526 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1207 12:19:10.468945    3526 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1207 12:19:10.468985    3526 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1207 12:19:10.471974    3526 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1207 12:19:10.472021    3526 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1207 12:19:10.472103    3526 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1207 12:19:10.472113    3526 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1207 12:19:10.472185    3526 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1207 12:19:10.482661    3526 start.go:159] libmachine.API.Create for "test-preload-790000" (driver="qemu2")
	I1207 12:19:10.482683    3526 client.go:168] LocalClient.Create starting
	I1207 12:19:10.482759    3526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:19:10.482791    3526 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:10.482799    3526 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:10.482836    3526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:19:10.482859    3526 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:10.482867    3526 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:10.483188    3526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:19:10.606516    3526 main.go:141] libmachine: Creating SSH key...
	I1207 12:19:10.833811    3526 main.go:141] libmachine: Creating Disk image...
	I1207 12:19:10.833837    3526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:19:10.834035    3526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2
	I1207 12:19:10.846296    3526 main.go:141] libmachine: STDOUT: 
	I1207 12:19:10.846315    3526 main.go:141] libmachine: STDERR: 
	I1207 12:19:10.846372    3526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2 +20000M
	I1207 12:19:10.856879    3526 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:19:10.856897    3526 main.go:141] libmachine: STDERR: 
	I1207 12:19:10.856915    3526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2
	I1207 12:19:10.856923    3526 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:19:10.856952    3526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:ff:01:4e:73:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2
	I1207 12:19:10.858801    3526 main.go:141] libmachine: STDOUT: 
	I1207 12:19:10.858817    3526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:19:10.858836    3526 client.go:171] LocalClient.Create took 376.154625ms
	I1207 12:19:11.195908    3526 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1207 12:19:11.209774    3526 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1207 12:19:11.212117    3526 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1207 12:19:11.215778    3526 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W1207 12:19:11.226545    3526 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1207 12:19:11.226630    3526 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1207 12:19:11.234007    3526 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1207 12:19:11.248929    3526 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1207 12:19:11.355685    3526 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1207 12:19:11.355728    3526 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 897.86175ms
	I1207 12:19:11.355779    3526 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1207 12:19:11.518884    3526 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1207 12:19:11.518989    3526 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1207 12:19:12.143926    3526 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 12:19:12.144013    3526 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.68631375s
	I1207 12:19:12.144056    3526 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 12:19:12.859236    3526 start.go:128] duration metric: createHost completed in 2.401025125s
	I1207 12:19:12.859314    3526 start.go:83] releasing machines lock for "test-preload-790000", held for 2.401178625s
	W1207 12:19:12.859381    3526 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:19:12.874759    3526 out.go:177] * Deleting "test-preload-790000" in qemu2 ...
	W1207 12:19:12.899415    3526 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:19:12.899450    3526 start.go:709] Will try again in 5 seconds ...
	I1207 12:19:13.756160    3526 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1207 12:19:13.756203    3526 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.298361s
	I1207 12:19:13.756233    3526 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1207 12:19:14.708193    3526 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1207 12:19:14.708243    3526 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.250405792s
	I1207 12:19:14.708305    3526 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1207 12:19:14.746546    3526 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1207 12:19:14.746588    3526 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.288969625s
	I1207 12:19:14.746614    3526 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1207 12:19:15.138612    3526 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1207 12:19:15.138656    3526 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.681034791s
	I1207 12:19:15.138684    3526 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1207 12:19:16.857219    3526 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1207 12:19:16.857265    3526 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.399286083s
	I1207 12:19:16.857295    3526 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1207 12:19:17.899587    3526 start.go:365] acquiring machines lock for test-preload-790000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:19:17.899733    3526 start.go:369] acquired machines lock for "test-preload-790000" in 117.958µs
	I1207 12:19:17.899775    3526 start.go:93] Provisioning new machine with config: &{Name:test-preload-790000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-790000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:19:17.899857    3526 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:19:17.910109    3526 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:19:17.937954    3526 start.go:159] libmachine.API.Create for "test-preload-790000" (driver="qemu2")
	I1207 12:19:17.937989    3526 client.go:168] LocalClient.Create starting
	I1207 12:19:17.938088    3526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:19:17.938129    3526 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:17.938142    3526 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:17.938191    3526 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:19:17.938220    3526 main.go:141] libmachine: Decoding PEM data...
	I1207 12:19:17.938230    3526 main.go:141] libmachine: Parsing certificate...
	I1207 12:19:17.938606    3526 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:19:18.064945    3526 main.go:141] libmachine: Creating SSH key...
	I1207 12:19:18.138247    3526 main.go:141] libmachine: Creating Disk image...
	I1207 12:19:18.138253    3526 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:19:18.138415    3526 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2
	I1207 12:19:18.150610    3526 main.go:141] libmachine: STDOUT: 
	I1207 12:19:18.150631    3526 main.go:141] libmachine: STDERR: 
	I1207 12:19:18.150697    3526 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2 +20000M
	I1207 12:19:18.161698    3526 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:19:18.161715    3526 main.go:141] libmachine: STDERR: 
	I1207 12:19:18.161727    3526 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2
	I1207 12:19:18.161738    3526 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:19:18.161767    3526 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:e1:5e:8c:4c:95 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/test-preload-790000/disk.qcow2
	I1207 12:19:18.163579    3526 main.go:141] libmachine: STDOUT: 
	I1207 12:19:18.163607    3526 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:19:18.163618    3526 client.go:171] LocalClient.Create took 225.628083ms
	I1207 12:19:19.688010    3526 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1207 12:19:19.688087    3526 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.230380083s
	I1207 12:19:19.688121    3526 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1207 12:19:19.688182    3526 cache.go:87] Successfully saved all images to host disk.
	I1207 12:19:20.165771    3526 start.go:128] duration metric: createHost completed in 2.265914125s
	I1207 12:19:20.165817    3526 start.go:83] releasing machines lock for "test-preload-790000", held for 2.266109334s
	W1207 12:19:20.166091    3526 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-790000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:19:20.176797    3526 out.go:177] 
	W1207 12:19:20.180854    3526 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:19:20.180910    3526 out.go:239] * 
	* 
	W1207 12:19:20.183440    3526 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:19:20.192599    3526 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-790000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:523: *** TestPreload FAILED at 2023-12-07 12:19:20.212015 -0800 PST m=+1144.564912251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-790000 -n test-preload-790000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-790000 -n test-preload-790000: exit status 7 (66.318542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-790000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-790000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-790000
--- FAIL: TestPreload (10.04s)
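Nearly every failure in this report collapses to the same host-side error: the qemu2 driver's `/opt/socket_vmnet/bin/socket_vmnet_client` cannot connect to `/var/run/socket_vmnet` ("Connection refused"), so no VM ever boots. A minimal diagnostic sketch of that reachability probe (Python; the socket path comes from the log above, the helper name is ours, not part of minikube):

```python
import os
import socket

SOCKET_VMNET_PATH = "/var/run/socket_vmnet"  # path reported in the failures above

def unix_socket_reachable(path: str, timeout: float = 1.0) -> bool:
    """Return True only if a unix-domain socket at `path` accepts a connection.

    A missing path and a dead daemon surface the same way the qemu2 driver
    reports them in this log: ENOENT / ECONNREFUSED.
    """
    if not os.path.exists(path):
        return False
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        try:
            s.connect(path)
            return True
        except OSError:  # ConnectionRefusedError, timeouts, EPERM, ...
            return False

if __name__ == "__main__":
    ok = unix_socket_reachable(SOCKET_VMNET_PATH)
    print(f"{SOCKET_VMNET_PATH}: {'reachable' if ok else 'NOT reachable'}")
```

On this agent the probe would presumably report NOT reachable; restarting the socket_vmnet daemon (e.g. its launchd service) is the usual remedy, though that is an assumption about the agent's setup, not something the log confirms.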

TestScheduledStopUnix (10.18s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-387000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-387000 --memory=2048 --driver=qemu2 : exit status 80 (10.009048292s)

-- stdout --
	* [scheduled-stop-387000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-387000 in cluster scheduled-stop-387000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-387000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-387000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-387000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-387000 in cluster scheduled-stop-387000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-387000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-387000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestScheduledStopUnix FAILED at 2023-12-07 12:19:30.389983 -0800 PST m=+1154.743063376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-387000 -n scheduled-stop-387000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-387000 -n scheduled-stop-387000: exit status 7 (70.387459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-387000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-387000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-387000
--- FAIL: TestScheduledStopUnix (10.18s)

TestSkaffold (11.9s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1309173609 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-568000 --memory=2600 --driver=qemu2 
E1207 12:19:34.136158    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-568000 --memory=2600 --driver=qemu2 : exit status 80 (9.6942935s)

-- stdout --
	* [skaffold-568000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-568000 in cluster skaffold-568000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-568000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-568000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-568000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-568000 in cluster skaffold-568000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-568000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-568000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:523: *** TestSkaffold FAILED at 2023-12-07 12:19:42.293849 -0800 PST m=+1166.647147293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-568000 -n skaffold-568000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-568000 -n skaffold-568000: exit status 7 (65.870875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-568000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-568000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-568000
--- FAIL: TestSkaffold (11.90s)

TestRunningBinaryUpgrade (155.81s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E1207 12:20:52.857091    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:21:20.563486    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:21:37.018148    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
version_upgrade_test.go:107: v1.6.2 release installation failed: bad response code: 404
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-12-07 12:22:58.583396 -0800 PST m=+1362.940338501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-696000 -n running-upgrade-696000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-696000 -n running-upgrade-696000: exit status 85 (89.411417ms)

-- stdout --
	* Profile "running-upgrade-696000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-696000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-696000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-696000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-696000\"")
helpers_test.go:175: Cleaning up "running-upgrade-696000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-696000
--- FAIL: TestRunningBinaryUpgrade (155.81s)
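Unlike the socket_vmnet failures, TestRunningBinaryUpgrade died before any VM work began: fetching the old v1.6.2 release binary returned HTTP 404 (version_upgrade_test.go:107). A missing asset is plausible here, since v1.6.2 likely predates darwin/arm64 release builds. A minimal sketch of the check the test effectively performs, assuming minikube's usual GitHub release URL layout (the helper names are ours):

```python
def release_binary_url(version: str, goos: str = "darwin", goarch: str = "arm64") -> str:
    """Build the download URL for a minikube release binary (assumed layout)."""
    return ("https://github.com/kubernetes/minikube/releases/download/"
            f"{version}/minikube-{goos}-{goarch}")

def check_release_response(status: int) -> None:
    """Fail the same way the test does on anything but HTTP 200."""
    if status != 200:
        raise RuntimeError(f"bad response code: {status}")
```

With `check_release_response(404)` this raises `bad response code: 404`, matching the message logged above; probing the URL with a HEAD request before the upgrade run would surface the missing asset early.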

TestKubernetesUpgrade (15.59s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-923000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-923000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.059457916s)

-- stdout --
	* [kubernetes-upgrade-923000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-923000 in cluster kubernetes-upgrade-923000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-923000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:22:58.932005    4043 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:22:58.932172    4043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:22:58.932175    4043 out.go:309] Setting ErrFile to fd 2...
	I1207 12:22:58.932177    4043 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:22:58.932291    4043 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:22:58.933341    4043 out.go:303] Setting JSON to false
	I1207 12:22:58.949223    4043 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3149,"bootTime":1701977429,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:22:58.949317    4043 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:22:58.953860    4043 out.go:177] * [kubernetes-upgrade-923000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:22:58.960823    4043 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:22:58.963905    4043 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:22:58.960878    4043 notify.go:220] Checking for updates...
	I1207 12:22:58.970806    4043 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:22:58.973799    4043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:22:58.976734    4043 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:22:58.979792    4043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:22:58.983100    4043 config.go:182] Loaded profile config "cert-expiration-552000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:22:58.983162    4043 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:22:58.983205    4043 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:22:58.987754    4043 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:22:58.994794    4043 start.go:298] selected driver: qemu2
	I1207 12:22:58.994802    4043 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:22:58.994807    4043 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:22:58.997174    4043 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:22:59.000781    4043 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:22:59.003825    4043 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 12:22:59.003859    4043 cni.go:84] Creating CNI manager for ""
	I1207 12:22:59.003867    4043 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 12:22:59.003871    4043 start_flags.go:323] config:
	{Name:kubernetes-upgrade-923000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-923000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:22:59.008149    4043 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:22:59.013803    4043 out.go:177] * Starting control plane node kubernetes-upgrade-923000 in cluster kubernetes-upgrade-923000
	I1207 12:22:59.017845    4043 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 12:22:59.017859    4043 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1207 12:22:59.017870    4043 cache.go:56] Caching tarball of preloaded images
	I1207 12:22:59.017929    4043 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:22:59.017935    4043 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1207 12:22:59.017992    4043 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/kubernetes-upgrade-923000/config.json ...
	I1207 12:22:59.018002    4043 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/kubernetes-upgrade-923000/config.json: {Name:mkea06d330e15387475dbad2a32ae9689cca6de8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:22:59.018202    4043 start.go:365] acquiring machines lock for kubernetes-upgrade-923000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:22:59.018233    4043 start.go:369] acquired machines lock for "kubernetes-upgrade-923000" in 23.542µs
	I1207 12:22:59.018244    4043 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-923000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-923000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:22:59.018271    4043 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:22:59.025793    4043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:22:59.041794    4043 start.go:159] libmachine.API.Create for "kubernetes-upgrade-923000" (driver="qemu2")
	I1207 12:22:59.041826    4043 client.go:168] LocalClient.Create starting
	I1207 12:22:59.041887    4043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:22:59.041918    4043 main.go:141] libmachine: Decoding PEM data...
	I1207 12:22:59.041929    4043 main.go:141] libmachine: Parsing certificate...
	I1207 12:22:59.041972    4043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:22:59.041993    4043 main.go:141] libmachine: Decoding PEM data...
	I1207 12:22:59.042001    4043 main.go:141] libmachine: Parsing certificate...
	I1207 12:22:59.042354    4043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:22:59.167896    4043 main.go:141] libmachine: Creating SSH key...
	I1207 12:22:59.217738    4043 main.go:141] libmachine: Creating Disk image...
	I1207 12:22:59.217743    4043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:22:59.217935    4043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2
	I1207 12:22:59.229908    4043 main.go:141] libmachine: STDOUT: 
	I1207 12:22:59.229928    4043 main.go:141] libmachine: STDERR: 
	I1207 12:22:59.229995    4043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2 +20000M
	I1207 12:22:59.240312    4043 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:22:59.240325    4043 main.go:141] libmachine: STDERR: 
	I1207 12:22:59.240340    4043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2
	I1207 12:22:59.240344    4043 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:22:59.240388    4043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:13:7c:ba:44:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2
	I1207 12:22:59.242003    4043 main.go:141] libmachine: STDOUT: 
	I1207 12:22:59.242019    4043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:22:59.242036    4043 client.go:171] LocalClient.Create took 200.207709ms
	I1207 12:23:01.244204    4043 start.go:128] duration metric: createHost completed in 2.225951916s
	I1207 12:23:01.244304    4043 start.go:83] releasing machines lock for "kubernetes-upgrade-923000", held for 2.226066458s
	W1207 12:23:01.244360    4043 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:23:01.255507    4043 out.go:177] * Deleting "kubernetes-upgrade-923000" in qemu2 ...
	W1207 12:23:01.282295    4043 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:23:01.282328    4043 start.go:709] Will try again in 5 seconds ...
	I1207 12:23:06.284447    4043 start.go:365] acquiring machines lock for kubernetes-upgrade-923000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:23:06.284734    4043 start.go:369] acquired machines lock for "kubernetes-upgrade-923000" in 202.166µs
	I1207 12:23:06.284816    4043 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-923000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-923000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:23:06.285003    4043 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:23:06.293461    4043 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:23:06.336623    4043 start.go:159] libmachine.API.Create for "kubernetes-upgrade-923000" (driver="qemu2")
	I1207 12:23:06.336686    4043 client.go:168] LocalClient.Create starting
	I1207 12:23:06.336934    4043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:23:06.337021    4043 main.go:141] libmachine: Decoding PEM data...
	I1207 12:23:06.337045    4043 main.go:141] libmachine: Parsing certificate...
	I1207 12:23:06.337121    4043 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:23:06.337171    4043 main.go:141] libmachine: Decoding PEM data...
	I1207 12:23:06.337193    4043 main.go:141] libmachine: Parsing certificate...
	I1207 12:23:06.337745    4043 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:23:06.475859    4043 main.go:141] libmachine: Creating SSH key...
	I1207 12:23:06.893691    4043 main.go:141] libmachine: Creating Disk image...
	I1207 12:23:06.893706    4043 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:23:06.893977    4043 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2
	I1207 12:23:06.906904    4043 main.go:141] libmachine: STDOUT: 
	I1207 12:23:06.906937    4043 main.go:141] libmachine: STDERR: 
	I1207 12:23:06.907006    4043 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2 +20000M
	I1207 12:23:06.917706    4043 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:23:06.917723    4043 main.go:141] libmachine: STDERR: 
	I1207 12:23:06.917744    4043 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2
	I1207 12:23:06.917750    4043 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:23:06.917808    4043 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:e6:2a:9a:fb:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2
	I1207 12:23:06.919526    4043 main.go:141] libmachine: STDOUT: 
	I1207 12:23:06.919541    4043 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:23:06.919555    4043 client.go:171] LocalClient.Create took 582.8745ms
	I1207 12:23:08.920228    4043 start.go:128] duration metric: createHost completed in 2.63524475s
	I1207 12:23:08.920323    4043 start.go:83] releasing machines lock for "kubernetes-upgrade-923000", held for 2.635591958s
	W1207 12:23:08.920717    4043 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-923000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:23:08.933366    4043 out.go:177] 
	W1207 12:23:08.937487    4043 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:23:08.937580    4043 out.go:239] * 
	W1207 12:23:08.940232    4043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:23:08.947341    4043 out.go:177] 

** /stderr **
version_upgrade_test.go:237: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-923000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
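Every VM create and restart in this log fails on the same `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. minikube's qemu2 driver could not reach the socket_vmnet daemon on the build agent. A minimal pre-flight sketch of the check a CI job could run before the test suite; the socket path comes from the log itself, while the `brew services` hint is an assumption about a typical Homebrew install of socket_vmnet:

```shell
# check_vmnet_socket PATH
# Returns 0 if PATH exists and is a unix socket (what socket_vmnet_client
# expects), 1 otherwise. This is purely a filesystem check: it shows the
# socket is present, not that the daemon will accept connections.
check_vmnet_socket() {
    if [ -S "$1" ]; then
        echo "ok: $1 is a socket"
    else
        # Daemon likely not running; on a Homebrew install, typically:
        #   sudo brew services start socket_vmnet
        echo "missing: $1 (is the socket_vmnet daemon running?)" >&2
        return 1
    fi
}

# Usage, mirroring the path in the failing qemu invocations:
# check_vmnet_socket /var/run/socket_vmnet || exit 1
```

Had this check run first, the 88 downstream failures would have collapsed into a single, directly actionable "daemon not running" error.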
version_upgrade_test.go:240: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-923000
version_upgrade_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-923000 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-923000 status --format={{.Host}}: exit status 7 (35.93175ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-923000 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-923000 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.185730459s)

-- stdout --
	* [kubernetes-upgrade-923000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-923000 in cluster kubernetes-upgrade-923000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-923000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-923000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I1207 12:23:09.130001    4064 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:23:09.130145    4064 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:23:09.130148    4064 out.go:309] Setting ErrFile to fd 2...
	I1207 12:23:09.130150    4064 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:23:09.130279    4064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:23:09.131296    4064 out.go:303] Setting JSON to false
	I1207 12:23:09.147057    4064 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3160,"bootTime":1701977429,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:23:09.147156    4064 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:23:09.152475    4064 out.go:177] * [kubernetes-upgrade-923000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:23:09.159625    4064 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:23:09.162572    4064 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:23:09.159664    4064 notify.go:220] Checking for updates...
	I1207 12:23:09.169592    4064 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:23:09.172567    4064 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:23:09.175563    4064 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:23:09.178592    4064 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:23:09.180325    4064 config.go:182] Loaded profile config "kubernetes-upgrade-923000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1207 12:23:09.180574    4064 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:23:09.184578    4064 out.go:177] * Using the qemu2 driver based on existing profile
	I1207 12:23:09.195588    4064 start.go:298] selected driver: qemu2
	I1207 12:23:09.195596    4064 start.go:902] validating driver "qemu2" against &{Name:kubernetes-upgrade-923000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubern
etesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-923000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:23:09.195660    4064 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:23:09.198060    4064 cni.go:84] Creating CNI manager for ""
	I1207 12:23:09.198079    4064 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:23:09.198086    4064 start_flags.go:323] config:
	{Name:kubernetes-upgrade-923000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:kubernetes-upgrade-92300
0 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQ
emuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:23:09.202406    4064 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:23:09.210590    4064 out.go:177] * Starting control plane node kubernetes-upgrade-923000 in cluster kubernetes-upgrade-923000
	I1207 12:23:09.214562    4064 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 12:23:09.214580    4064 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1207 12:23:09.214590    4064 cache.go:56] Caching tarball of preloaded images
	I1207 12:23:09.214658    4064 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:23:09.214664    4064 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on docker
	I1207 12:23:09.214730    4064 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/kubernetes-upgrade-923000/config.json ...
	I1207 12:23:09.215285    4064 start.go:365] acquiring machines lock for kubernetes-upgrade-923000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:23:09.215311    4064 start.go:369] acquired machines lock for "kubernetes-upgrade-923000" in 19.5µs
	I1207 12:23:09.215318    4064 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:23:09.215322    4064 fix.go:54] fixHost starting: 
	I1207 12:23:09.215435    4064 fix.go:102] recreateIfNeeded on kubernetes-upgrade-923000: state=Stopped err=<nil>
	W1207 12:23:09.215443    4064 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:23:09.222592    4064 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-923000" ...
	I1207 12:23:09.226673    4064 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:e6:2a:9a:fb:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2
	I1207 12:23:09.228767    4064 main.go:141] libmachine: STDOUT: 
	I1207 12:23:09.228786    4064 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:23:09.228817    4064 fix.go:56] fixHost completed within 13.492709ms
	I1207 12:23:09.228821    4064 start.go:83] releasing machines lock for "kubernetes-upgrade-923000", held for 13.507333ms
	W1207 12:23:09.228828    4064 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:23:09.228869    4064 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:23:09.228874    4064 start.go:709] Will try again in 5 seconds ...
	I1207 12:23:14.230981    4064 start.go:365] acquiring machines lock for kubernetes-upgrade-923000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:23:14.231321    4064 start.go:369] acquired machines lock for "kubernetes-upgrade-923000" in 235.916µs
	I1207 12:23:14.231428    4064 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:23:14.231448    4064 fix.go:54] fixHost starting: 
	I1207 12:23:14.232137    4064 fix.go:102] recreateIfNeeded on kubernetes-upgrade-923000: state=Stopped err=<nil>
	W1207 12:23:14.232167    4064 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:23:14.241494    4064 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-923000" ...
	I1207 12:23:14.244803    4064 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:e6:2a:9a:fb:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubernetes-upgrade-923000/disk.qcow2
	I1207 12:23:14.253879    4064 main.go:141] libmachine: STDOUT: 
	I1207 12:23:14.253963    4064 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:23:14.254061    4064 fix.go:56] fixHost completed within 22.610375ms
	I1207 12:23:14.254082    4064 start.go:83] releasing machines lock for "kubernetes-upgrade-923000", held for 22.739375ms
	W1207 12:23:14.254359    4064 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-923000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-923000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:23:14.259530    4064 out.go:177] 
	W1207 12:23:14.262600    4064 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:23:14.262623    4064 out.go:239] * 
	* 
	W1207 12:23:14.265224    4064 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:23:14.272467    4064 out.go:177] 

** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-923000 --memory=2200 --kubernetes-version=v1.29.0-rc.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-923000 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-923000 version --output=json: exit status 1 (63.022959ms)

** stderr ** 
	error: context "kubernetes-upgrade-923000" does not exist

** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-12-07 12:23:14.351413 -0800 PST m=+1378.708648501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-923000 -n kubernetes-upgrade-923000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-923000 -n kubernetes-upgrade-923000: exit status 7 (35.504417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-923000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-923000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-923000
--- FAIL: TestKubernetesUpgrade (15.59s)
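Every qemu2-driver failure in this run traces back to the same environmental root cause: `/var/run/socket_vmnet` refuses connections, so the VM never gets a network. A minimal pre-flight sketch a runner could execute before the qemu2 suites; the socket path is taken from the logs above, and `check_vmnet_socket` is an illustrative helper, not part of the test harness:

```shell
# Pre-flight check for the qemu2 driver: the failing tests all report
# 'Failed to connect to "/var/run/socket_vmnet": Connection refused'.
# check_vmnet_socket reports whether the given path exists as a unix socket.
check_vmnet_socket() {
  sock="$1"
  if [ -S "$sock" ]; then
    echo "ok: $sock"
  else
    echo "missing: $sock (start socket_vmnet first)"
    return 1
  fi
}

# Probe the path from the logs; '|| true' keeps the sketch non-fatal.
check_vmnet_socket /var/run/socket_vmnet || true
```

A socket file that exists but has no listener would still be refused at connect time, so a passing `-S` check narrows the fault but does not fully rule out a dead daemon.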

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.11s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=17719
- KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1850625769/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.11s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.36s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=17719
- KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current166808240/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.36s)
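Both hyperkit subtests fail for the same environmental reason: hyperkit is an Intel-Mac-only hypervisor, so minikube exits with DRV_UNSUPPORTED_OS (status 56) on this darwin/arm64 agent. A hedged sketch of the platform gate such a harness could apply; `hyperkit_supported` is illustrative and not taken from the minikube source:

```shell
# Decide whether hyperkit-based tests should run, given an "os/arch" string.
# hyperkit only exists for Intel Macs, so anything but darwin/amd64 skips.
hyperkit_supported() {
  case "$1" in
    darwin/amd64) return 0 ;;
    *) return 1 ;;
  esac
}

host="$(uname -s | tr '[:upper:]' '[:lower:]')/$(uname -m)"
if hyperkit_supported "$host"; then
  echo "run hyperkit tests on $host"
else
  echo "skip hyperkit tests on $host"
fi
```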

TestStoppedBinaryUpgrade/Setup (156.14s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:168: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (156.14s)
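The 404 here comes from downloading an old release binary for this platform; v1.6.2 predates minikube's darwin/arm64 builds, so the asset plausibly never existed. A sketch of how the expected asset URL could be constructed and probed; the googleapis layout is an assumption based on minikube's public release bucket, and the network probe is left as a comment so the sketch stays offline:

```shell
# Build the release-asset URL for a given version/os/arch (assumed layout).
release_url() {
  echo "https://storage.googleapis.com/minikube/releases/$1/minikube-$2-$3"
}

url="$(release_url v1.6.2 darwin arm64)"
echo "$url"
# An online probe would look like:
#   curl -fsI "$url" >/dev/null || echo "asset missing (HTTP error, e.g. 404)"
```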

TestPause/serial/Start (9.89s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-567000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-567000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.819560209s)

-- stdout --
	* [pause-567000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-567000 in cluster pause-567000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-567000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-567000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-567000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-567000 -n pause-567000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-567000 -n pause-567000: exit status 7 (71.337833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-567000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.89s)

TestNoKubernetes/serial/StartWithK8s (9.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-057000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-057000 --driver=qemu2 : exit status 80 (9.792003416s)

-- stdout --
	* [NoKubernetes-057000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-057000 in cluster NoKubernetes-057000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-057000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-057000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-057000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-057000 -n NoKubernetes-057000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-057000 -n NoKubernetes-057000: exit status 7 (69.824791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-057000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)

TestNoKubernetes/serial/StartWithStopK8s (5.32s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-057000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-057000 --no-kubernetes --driver=qemu2 : exit status 80 (5.243873375s)

-- stdout --
	* [NoKubernetes-057000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-057000
	* Restarting existing qemu2 VM for "NoKubernetes-057000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-057000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-057000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-057000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-057000 -n NoKubernetes-057000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-057000 -n NoKubernetes-057000: exit status 7 (73.711958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-057000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.32s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-057000 --no-kubernetes --driver=qemu2 
E1207 12:23:53.149330    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-057000 --no-kubernetes --driver=qemu2 : exit status 80 (5.242710041s)

-- stdout --
	* [NoKubernetes-057000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-057000
	* Restarting existing qemu2 VM for "NoKubernetes-057000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-057000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-057000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-057000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-057000 -n NoKubernetes-057000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-057000 -n NoKubernetes-057000: exit status 7 (72.866416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-057000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-057000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-057000 --driver=qemu2 : exit status 80 (5.234156583s)

-- stdout --
	* [NoKubernetes-057000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-057000
	* Restarting existing qemu2 VM for "NoKubernetes-057000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-057000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-057000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-057000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-057000 -n NoKubernetes-057000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-057000 -n NoKubernetes-057000: exit status 7 (71.307542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-057000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

TestNetworkPlugins/group/auto/Start (9.78s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.777468167s)

-- stdout --
	* [auto-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-676000 in cluster auto-676000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:23:59.621042    4216 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:23:59.621211    4216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:23:59.621214    4216 out.go:309] Setting ErrFile to fd 2...
	I1207 12:23:59.621217    4216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:23:59.621342    4216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:23:59.622411    4216 out.go:303] Setting JSON to false
	I1207 12:23:59.638220    4216 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3210,"bootTime":1701977429,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:23:59.638309    4216 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:23:59.645297    4216 out.go:177] * [auto-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:23:59.653179    4216 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:23:59.653213    4216 notify.go:220] Checking for updates...
	I1207 12:23:59.657227    4216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:23:59.660187    4216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:23:59.663158    4216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:23:59.666210    4216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:23:59.667760    4216 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:23:59.671587    4216 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:23:59.671636    4216 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:23:59.676199    4216 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:23:59.682161    4216 start.go:298] selected driver: qemu2
	I1207 12:23:59.682168    4216 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:23:59.682174    4216 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:23:59.684454    4216 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:23:59.688267    4216 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:23:59.691328    4216 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:23:59.691387    4216 cni.go:84] Creating CNI manager for ""
	I1207 12:23:59.691396    4216 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:23:59.691400    4216 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:23:59.691406    4216 start_flags.go:323] config:
	{Name:auto-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:23:59.695933    4216 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:23:59.703175    4216 out.go:177] * Starting control plane node auto-676000 in cluster auto-676000
	I1207 12:23:59.707186    4216 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:23:59.707203    4216 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:23:59.707215    4216 cache.go:56] Caching tarball of preloaded images
	I1207 12:23:59.707300    4216 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:23:59.707314    4216 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:23:59.707390    4216 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/auto-676000/config.json ...
	I1207 12:23:59.707405    4216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/auto-676000/config.json: {Name:mkf99b01f90246ce5e0efc85214d730d9940a046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:23:59.707623    4216 start.go:365] acquiring machines lock for auto-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:23:59.707655    4216 start.go:369] acquired machines lock for "auto-676000" in 25.917µs
	I1207 12:23:59.707666    4216 start.go:93] Provisioning new machine with config: &{Name:auto-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:23:59.707708    4216 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:23:59.716175    4216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:23:59.733485    4216 start.go:159] libmachine.API.Create for "auto-676000" (driver="qemu2")
	I1207 12:23:59.733511    4216 client.go:168] LocalClient.Create starting
	I1207 12:23:59.733571    4216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:23:59.733597    4216 main.go:141] libmachine: Decoding PEM data...
	I1207 12:23:59.733618    4216 main.go:141] libmachine: Parsing certificate...
	I1207 12:23:59.733662    4216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:23:59.733684    4216 main.go:141] libmachine: Decoding PEM data...
	I1207 12:23:59.733693    4216 main.go:141] libmachine: Parsing certificate...
	I1207 12:23:59.734064    4216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:23:59.861216    4216 main.go:141] libmachine: Creating SSH key...
	I1207 12:23:59.976736    4216 main.go:141] libmachine: Creating Disk image...
	I1207 12:23:59.976741    4216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:23:59.976922    4216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2
	I1207 12:23:59.989088    4216 main.go:141] libmachine: STDOUT: 
	I1207 12:23:59.989109    4216 main.go:141] libmachine: STDERR: 
	I1207 12:23:59.989163    4216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2 +20000M
	I1207 12:23:59.999730    4216 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:23:59.999746    4216 main.go:141] libmachine: STDERR: 
	I1207 12:23:59.999763    4216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2
	I1207 12:23:59.999770    4216 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:23:59.999815    4216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:0c:58:ab:46:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2
	I1207 12:24:00.001492    4216 main.go:141] libmachine: STDOUT: 
	I1207 12:24:00.001515    4216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:24:00.001536    4216 client.go:171] LocalClient.Create took 268.023792ms
	I1207 12:24:02.003680    4216 start.go:128] duration metric: createHost completed in 2.295997125s
	I1207 12:24:02.003751    4216 start.go:83] releasing machines lock for "auto-676000", held for 2.29613s
	W1207 12:24:02.003843    4216 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:02.017086    4216 out.go:177] * Deleting "auto-676000" in qemu2 ...
	W1207 12:24:02.040253    4216 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:02.040281    4216 start.go:709] Will try again in 5 seconds ...
	I1207 12:24:07.042482    4216 start.go:365] acquiring machines lock for auto-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:24:07.042915    4216 start.go:369] acquired machines lock for "auto-676000" in 318.666µs
	I1207 12:24:07.043064    4216 start.go:93] Provisioning new machine with config: &{Name:auto-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:24:07.043314    4216 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:24:07.049086    4216 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:24:07.097293    4216 start.go:159] libmachine.API.Create for "auto-676000" (driver="qemu2")
	I1207 12:24:07.097355    4216 client.go:168] LocalClient.Create starting
	I1207 12:24:07.097465    4216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:24:07.097535    4216 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:07.097550    4216 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:07.097606    4216 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:24:07.097647    4216 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:07.097662    4216 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:07.098304    4216 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:24:07.235845    4216 main.go:141] libmachine: Creating SSH key...
	I1207 12:24:07.293079    4216 main.go:141] libmachine: Creating Disk image...
	I1207 12:24:07.293084    4216 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:24:07.293260    4216 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2
	I1207 12:24:07.305728    4216 main.go:141] libmachine: STDOUT: 
	I1207 12:24:07.305749    4216 main.go:141] libmachine: STDERR: 
	I1207 12:24:07.305813    4216 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2 +20000M
	I1207 12:24:07.316305    4216 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:24:07.316321    4216 main.go:141] libmachine: STDERR: 
	I1207 12:24:07.316338    4216 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2
	I1207 12:24:07.316343    4216 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:24:07.316399    4216 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:58:c2:57:77:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/auto-676000/disk.qcow2
	I1207 12:24:07.318074    4216 main.go:141] libmachine: STDOUT: 
	I1207 12:24:07.318089    4216 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:24:07.318102    4216 client.go:171] LocalClient.Create took 220.743167ms
	I1207 12:24:09.320256    4216 start.go:128] duration metric: createHost completed in 2.276941625s
	I1207 12:24:09.320332    4216 start.go:83] releasing machines lock for "auto-676000", held for 2.277435167s
	W1207 12:24:09.320951    4216 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:09.334608    4216 out.go:177] 
	W1207 12:24:09.339627    4216 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:24:09.339701    4216 out.go:239] * 
	W1207 12:24:09.342831    4216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:24:09.352585    4216 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.78s)

TestNetworkPlugins/group/kindnet/Start (9.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
E1207 12:24:20.857716    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/ingress-addon-legacy-427000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.953375375s)

-- stdout --
	* [kindnet-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-676000 in cluster kindnet-676000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:24:11.589352    4335 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:24:11.589538    4335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:24:11.589541    4335 out.go:309] Setting ErrFile to fd 2...
	I1207 12:24:11.589543    4335 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:24:11.589672    4335 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:24:11.590672    4335 out.go:303] Setting JSON to false
	I1207 12:24:11.606577    4335 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3222,"bootTime":1701977429,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:24:11.606673    4335 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:24:11.612619    4335 out.go:177] * [kindnet-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:24:11.620548    4335 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:24:11.620576    4335 notify.go:220] Checking for updates...
	I1207 12:24:11.625764    4335 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:24:11.628574    4335 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:24:11.631586    4335 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:24:11.634557    4335 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:24:11.637551    4335 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:24:11.640964    4335 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:24:11.641008    4335 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:24:11.645563    4335 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:24:11.652523    4335 start.go:298] selected driver: qemu2
	I1207 12:24:11.652529    4335 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:24:11.652535    4335 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:24:11.654835    4335 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:24:11.657584    4335 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:24:11.660572    4335 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:24:11.660631    4335 cni.go:84] Creating CNI manager for "kindnet"
	I1207 12:24:11.660635    4335 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1207 12:24:11.660644    4335 start_flags.go:323] config:
	{Name:kindnet-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:24:11.665295    4335 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:24:11.672428    4335 out.go:177] * Starting control plane node kindnet-676000 in cluster kindnet-676000
	I1207 12:24:11.676551    4335 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:24:11.676565    4335 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:24:11.676573    4335 cache.go:56] Caching tarball of preloaded images
	I1207 12:24:11.676626    4335 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:24:11.676632    4335 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:24:11.676701    4335 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/kindnet-676000/config.json ...
	I1207 12:24:11.676712    4335 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/kindnet-676000/config.json: {Name:mk5e4bf997762104791c3beca90244b49ca243d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:24:11.676926    4335 start.go:365] acquiring machines lock for kindnet-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:24:11.676958    4335 start.go:369] acquired machines lock for "kindnet-676000" in 26.292µs
	I1207 12:24:11.676970    4335 start.go:93] Provisioning new machine with config: &{Name:kindnet-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:24:11.677005    4335 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:24:11.684559    4335 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:24:11.701385    4335 start.go:159] libmachine.API.Create for "kindnet-676000" (driver="qemu2")
	I1207 12:24:11.701411    4335 client.go:168] LocalClient.Create starting
	I1207 12:24:11.701470    4335 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:24:11.701505    4335 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:11.701521    4335 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:11.701560    4335 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:24:11.701582    4335 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:11.701588    4335 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:11.701952    4335 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:24:11.827931    4335 main.go:141] libmachine: Creating SSH key...
	I1207 12:24:11.933292    4335 main.go:141] libmachine: Creating Disk image...
	I1207 12:24:11.933298    4335 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:24:11.933472    4335 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2
	I1207 12:24:11.945691    4335 main.go:141] libmachine: STDOUT: 
	I1207 12:24:11.945710    4335 main.go:141] libmachine: STDERR: 
	I1207 12:24:11.945768    4335 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2 +20000M
	I1207 12:24:11.956309    4335 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:24:11.956335    4335 main.go:141] libmachine: STDERR: 
	I1207 12:24:11.956363    4335 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2
	I1207 12:24:11.956369    4335 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:24:11.956402    4335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:fb:4d:1c:af:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2
	I1207 12:24:11.958044    4335 main.go:141] libmachine: STDOUT: 
	I1207 12:24:11.958057    4335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:24:11.958077    4335 client.go:171] LocalClient.Create took 256.664458ms
	I1207 12:24:13.960229    4335 start.go:128] duration metric: createHost completed in 2.28323825s
	I1207 12:24:13.960291    4335 start.go:83] releasing machines lock for "kindnet-676000", held for 2.283366916s
	W1207 12:24:13.960354    4335 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:13.973707    4335 out.go:177] * Deleting "kindnet-676000" in qemu2 ...
	W1207 12:24:13.996256    4335 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:13.996282    4335 start.go:709] Will try again in 5 seconds ...
	I1207 12:24:18.998387    4335 start.go:365] acquiring machines lock for kindnet-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:24:18.998741    4335 start.go:369] acquired machines lock for "kindnet-676000" in 269.875µs
	I1207 12:24:18.998852    4335 start.go:93] Provisioning new machine with config: &{Name:kindnet-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:24:18.999144    4335 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:24:19.005942    4335 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:24:19.055118    4335 start.go:159] libmachine.API.Create for "kindnet-676000" (driver="qemu2")
	I1207 12:24:19.055159    4335 client.go:168] LocalClient.Create starting
	I1207 12:24:19.055345    4335 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:24:19.055427    4335 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:19.055443    4335 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:19.055498    4335 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:24:19.055541    4335 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:19.055557    4335 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:19.056078    4335 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:24:19.193111    4335 main.go:141] libmachine: Creating SSH key...
	I1207 12:24:19.440455    4335 main.go:141] libmachine: Creating Disk image...
	I1207 12:24:19.440466    4335 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:24:19.440695    4335 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2
	I1207 12:24:19.453081    4335 main.go:141] libmachine: STDOUT: 
	I1207 12:24:19.453100    4335 main.go:141] libmachine: STDERR: 
	I1207 12:24:19.453176    4335 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2 +20000M
	I1207 12:24:19.463663    4335 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:24:19.463690    4335 main.go:141] libmachine: STDERR: 
	I1207 12:24:19.463706    4335 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2
	I1207 12:24:19.463712    4335 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:24:19.463754    4335 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:b5:6d:1d:8f:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kindnet-676000/disk.qcow2
	I1207 12:24:19.465496    4335 main.go:141] libmachine: STDOUT: 
	I1207 12:24:19.465513    4335 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:24:19.465527    4335 client.go:171] LocalClient.Create took 410.368125ms
	I1207 12:24:21.467695    4335 start.go:128] duration metric: createHost completed in 2.46852875s
	I1207 12:24:21.467874    4335 start.go:83] releasing machines lock for "kindnet-676000", held for 2.469047625s
	W1207 12:24:21.468432    4335 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:21.478026    4335 out.go:177] 
	W1207 12:24:21.483917    4335 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:24:21.483977    4335 out.go:239] * 
	* 
	W1207 12:24:21.486523    4335 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:24:21.496961    4335 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.96s)
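
Every start failure in this group carries the same error: `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet daemon was not listening when libmachine launched `socket_vmnet_client`. A minimal pre-flight sketch is below; the `check_socket` helper is an assumption for illustration, not part of the minikube test harness, and `/var/run/socket_vmnet` is the SocketVMnetPath shown in the configs above:

```shell
#!/bin/sh
# check_socket PATH - report whether a unix-domain socket exists at PATH.
# A missing socket here reproduces the "Connection refused" failures above.
check_socket() {
  if [ -S "$1" ]; then
    echo "socket present: $1"
  else
    echo "socket missing: $1"
  fi
}

# Default to the SocketVMnetPath used by these test runs.
check_socket "${1:-/var/run/socket_vmnet}"
```

If the socket is missing, starting the socket_vmnet daemon (per its own documentation) before re-running the suite should clear this whole class of failures.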

TestNetworkPlugins/group/flannel/Start (9.76s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.7529555s)

-- stdout --
	* [flannel-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-676000 in cluster flannel-676000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:24:23.827986    4461 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:24:23.828144    4461 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:24:23.828147    4461 out.go:309] Setting ErrFile to fd 2...
	I1207 12:24:23.828149    4461 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:24:23.828270    4461 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:24:23.829242    4461 out.go:303] Setting JSON to false
	I1207 12:24:23.844934    4461 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3234,"bootTime":1701977429,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:24:23.845027    4461 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:24:23.851264    4461 out.go:177] * [flannel-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:24:23.859213    4461 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:24:23.859272    4461 notify.go:220] Checking for updates...
	I1207 12:24:23.863039    4461 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:24:23.866207    4461 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:24:23.869141    4461 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:24:23.872174    4461 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:24:23.875136    4461 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:24:23.878460    4461 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:24:23.878508    4461 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:24:23.883169    4461 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:24:23.890200    4461 start.go:298] selected driver: qemu2
	I1207 12:24:23.890207    4461 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:24:23.890213    4461 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:24:23.892607    4461 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:24:23.896122    4461 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:24:23.899296    4461 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:24:23.899336    4461 cni.go:84] Creating CNI manager for "flannel"
	I1207 12:24:23.899340    4461 start_flags.go:318] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1207 12:24:23.899344    4461 start_flags.go:323] config:
	{Name:flannel-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:24:23.903539    4461 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:24:23.910999    4461 out.go:177] * Starting control plane node flannel-676000 in cluster flannel-676000
	I1207 12:24:23.915146    4461 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:24:23.915163    4461 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:24:23.915172    4461 cache.go:56] Caching tarball of preloaded images
	I1207 12:24:23.915234    4461 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:24:23.915241    4461 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:24:23.915303    4461 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/flannel-676000/config.json ...
	I1207 12:24:23.915315    4461 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/flannel-676000/config.json: {Name:mk0f3924b6800a03e4042a77643412cc110fffce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:24:23.915523    4461 start.go:365] acquiring machines lock for flannel-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:24:23.915556    4461 start.go:369] acquired machines lock for "flannel-676000" in 27.625µs
	I1207 12:24:23.915567    4461 start.go:93] Provisioning new machine with config: &{Name:flannel-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:24:23.915595    4461 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:24:23.923118    4461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:24:23.939694    4461 start.go:159] libmachine.API.Create for "flannel-676000" (driver="qemu2")
	I1207 12:24:23.939725    4461 client.go:168] LocalClient.Create starting
	I1207 12:24:23.939791    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:24:23.939818    4461 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:23.939830    4461 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:23.939866    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:24:23.939886    4461 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:23.939893    4461 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:23.940230    4461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:24:24.066037    4461 main.go:141] libmachine: Creating SSH key...
	I1207 12:24:24.154661    4461 main.go:141] libmachine: Creating Disk image...
	I1207 12:24:24.154667    4461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:24:24.154841    4461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2
	I1207 12:24:24.166881    4461 main.go:141] libmachine: STDOUT: 
	I1207 12:24:24.166900    4461 main.go:141] libmachine: STDERR: 
	I1207 12:24:24.166966    4461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2 +20000M
	I1207 12:24:24.177427    4461 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:24:24.177442    4461 main.go:141] libmachine: STDERR: 
	I1207 12:24:24.177470    4461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2
	I1207 12:24:24.177480    4461 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:24:24.177509    4461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:38:22:cf:80:80 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2
	I1207 12:24:24.179140    4461 main.go:141] libmachine: STDOUT: 
	I1207 12:24:24.179152    4461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:24:24.179172    4461 client.go:171] LocalClient.Create took 239.4455ms
	I1207 12:24:26.181353    4461 start.go:128] duration metric: createHost completed in 2.265768208s
	I1207 12:24:26.181443    4461 start.go:83] releasing machines lock for "flannel-676000", held for 2.265920041s
	W1207 12:24:26.181492    4461 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:26.188799    4461 out.go:177] * Deleting "flannel-676000" in qemu2 ...
	W1207 12:24:26.217024    4461 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:26.217054    4461 start.go:709] Will try again in 5 seconds ...
	I1207 12:24:31.219178    4461 start.go:365] acquiring machines lock for flannel-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:24:31.219645    4461 start.go:369] acquired machines lock for "flannel-676000" in 303.708µs
	I1207 12:24:31.219752    4461 start.go:93] Provisioning new machine with config: &{Name:flannel-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:24:31.220024    4461 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:24:31.229504    4461 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:24:31.278337    4461 start.go:159] libmachine.API.Create for "flannel-676000" (driver="qemu2")
	I1207 12:24:31.278385    4461 client.go:168] LocalClient.Create starting
	I1207 12:24:31.278507    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:24:31.278578    4461 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:31.278596    4461 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:31.278658    4461 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:24:31.278699    4461 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:31.278710    4461 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:31.279223    4461 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:24:31.416681    4461 main.go:141] libmachine: Creating SSH key...
	I1207 12:24:31.480133    4461 main.go:141] libmachine: Creating Disk image...
	I1207 12:24:31.480139    4461 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:24:31.480312    4461 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2
	I1207 12:24:31.492545    4461 main.go:141] libmachine: STDOUT: 
	I1207 12:24:31.492566    4461 main.go:141] libmachine: STDERR: 
	I1207 12:24:31.492617    4461 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2 +20000M
	I1207 12:24:31.503127    4461 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:24:31.503142    4461 main.go:141] libmachine: STDERR: 
	I1207 12:24:31.503160    4461 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2
	I1207 12:24:31.503165    4461 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:24:31.503197    4461 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:8b:66:27:f2:c5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/flannel-676000/disk.qcow2
	I1207 12:24:31.504821    4461 main.go:141] libmachine: STDOUT: 
	I1207 12:24:31.504836    4461 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:24:31.504847    4461 client.go:171] LocalClient.Create took 226.460166ms
	I1207 12:24:33.506987    4461 start.go:128] duration metric: createHost completed in 2.286977417s
	I1207 12:24:33.507055    4461 start.go:83] releasing machines lock for "flannel-676000", held for 2.287430208s
	W1207 12:24:33.507443    4461 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:33.518616    4461 out.go:177] 
	W1207 12:24:33.523403    4461 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:24:33.523447    4461 out.go:239] * 
	* 
	W1207 12:24:33.525984    4461 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:24:33.535992    4461 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.76s)
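Every qemu2 start failure in this report reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor. A host-side pre-check is sketched below; the socket path is taken from the log itself (SocketVMnetPath), while the brew service name is an assumption based on a Homebrew install of socket_vmnet.

```shell
# Verify the socket_vmnet daemon is up before rerunning the suite.
# The path matches SocketVMnetPath from the machine config in the log above.
SOCKET=/var/run/socket_vmnet
if [ -S "$SOCKET" ]; then
  echo "socket present: $SOCKET"
else
  # Hypothetical remedy for a Homebrew install; adjust to your setup.
  echo "socket missing: $SOCKET (try: sudo brew services start socket_vmnet)"
fi
```

Either branch prints the socket path, so the check doubles as a record of which path the run expected.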

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (9.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.838505667s)

                                                
                                                
-- stdout --
	* [enable-default-cni-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-676000 in cluster enable-default-cni-676000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 12:24:36.008202    4591 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:24:36.008376    4591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:24:36.008379    4591 out.go:309] Setting ErrFile to fd 2...
	I1207 12:24:36.008382    4591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:24:36.008494    4591 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:24:36.009500    4591 out.go:303] Setting JSON to false
	I1207 12:24:36.025370    4591 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3247,"bootTime":1701977429,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:24:36.025461    4591 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:24:36.031928    4591 out.go:177] * [enable-default-cni-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:24:36.039885    4591 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:24:36.043863    4591 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:24:36.039977    4591 notify.go:220] Checking for updates...
	I1207 12:24:36.049856    4591 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:24:36.052865    4591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:24:36.055861    4591 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:24:36.058844    4591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:24:36.062209    4591 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:24:36.062262    4591 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:24:36.066801    4591 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:24:36.074032    4591 start.go:298] selected driver: qemu2
	I1207 12:24:36.074041    4591 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:24:36.074048    4591 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:24:36.076391    4591 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:24:36.079836    4591 out.go:177] * Automatically selected the socket_vmnet network
	E1207 12:24:36.082917    4591 start_flags.go:465] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1207 12:24:36.082932    4591 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:24:36.082982    4591 cni.go:84] Creating CNI manager for "bridge"
	I1207 12:24:36.082988    4591 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:24:36.082994    4591 start_flags.go:323] config:
	{Name:enable-default-cni-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:24:36.087652    4591 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:24:36.094845    4591 out.go:177] * Starting control plane node enable-default-cni-676000 in cluster enable-default-cni-676000
	I1207 12:24:36.098835    4591 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:24:36.098849    4591 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:24:36.098857    4591 cache.go:56] Caching tarball of preloaded images
	I1207 12:24:36.098911    4591 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:24:36.098917    4591 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:24:36.098977    4591 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/enable-default-cni-676000/config.json ...
	I1207 12:24:36.098988    4591 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/enable-default-cni-676000/config.json: {Name:mk75f3486461577972c871138964974b40fc449c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:24:36.099201    4591 start.go:365] acquiring machines lock for enable-default-cni-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:24:36.099237    4591 start.go:369] acquired machines lock for "enable-default-cni-676000" in 27.166µs
	I1207 12:24:36.099249    4591 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:24:36.099276    4591 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:24:36.107847    4591 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:24:36.125543    4591 start.go:159] libmachine.API.Create for "enable-default-cni-676000" (driver="qemu2")
	I1207 12:24:36.125575    4591 client.go:168] LocalClient.Create starting
	I1207 12:24:36.125633    4591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:24:36.125666    4591 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:36.125676    4591 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:36.125712    4591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:24:36.125735    4591 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:36.125743    4591 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:36.126178    4591 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:24:36.251689    4591 main.go:141] libmachine: Creating SSH key...
	I1207 12:24:36.369327    4591 main.go:141] libmachine: Creating Disk image...
	I1207 12:24:36.369333    4591 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:24:36.369518    4591 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2
	I1207 12:24:36.381764    4591 main.go:141] libmachine: STDOUT: 
	I1207 12:24:36.381782    4591 main.go:141] libmachine: STDERR: 
	I1207 12:24:36.381845    4591 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2 +20000M
	I1207 12:24:36.392413    4591 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:24:36.392434    4591 main.go:141] libmachine: STDERR: 
	I1207 12:24:36.392452    4591 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2
	I1207 12:24:36.392459    4591 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:24:36.392488    4591 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:0d:e6:4e:29:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2
	I1207 12:24:36.394168    4591 main.go:141] libmachine: STDOUT: 
	I1207 12:24:36.394183    4591 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:24:36.394203    4591 client.go:171] LocalClient.Create took 268.625875ms
	I1207 12:24:38.396410    4591 start.go:128] duration metric: createHost completed in 2.297159625s
	I1207 12:24:38.396460    4591 start.go:83] releasing machines lock for "enable-default-cni-676000", held for 2.297255791s
	W1207 12:24:38.396510    4591 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:38.409714    4591 out.go:177] * Deleting "enable-default-cni-676000" in qemu2 ...
	W1207 12:24:38.433715    4591 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:38.433749    4591 start.go:709] Will try again in 5 seconds ...
	I1207 12:24:43.435929    4591 start.go:365] acquiring machines lock for enable-default-cni-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:24:43.436403    4591 start.go:369] acquired machines lock for "enable-default-cni-676000" in 368.416µs
	I1207 12:24:43.436537    4591 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:24:43.436826    4591 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:24:43.446400    4591 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:24:43.497519    4591 start.go:159] libmachine.API.Create for "enable-default-cni-676000" (driver="qemu2")
	I1207 12:24:43.497571    4591 client.go:168] LocalClient.Create starting
	I1207 12:24:43.497676    4591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:24:43.497742    4591 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:43.497761    4591 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:43.497822    4591 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:24:43.497867    4591 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:43.497880    4591 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:43.498370    4591 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:24:43.637173    4591 main.go:141] libmachine: Creating SSH key...
	I1207 12:24:43.745123    4591 main.go:141] libmachine: Creating Disk image...
	I1207 12:24:43.745137    4591 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:24:43.745335    4591 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2
	I1207 12:24:43.757448    4591 main.go:141] libmachine: STDOUT: 
	I1207 12:24:43.757471    4591 main.go:141] libmachine: STDERR: 
	I1207 12:24:43.757529    4591 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2 +20000M
	I1207 12:24:43.768062    4591 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:24:43.768077    4591 main.go:141] libmachine: STDERR: 
	I1207 12:24:43.768091    4591 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2
	I1207 12:24:43.768101    4591 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:24:43.768146    4591 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:a2:57:47:11:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/enable-default-cni-676000/disk.qcow2
	I1207 12:24:43.769885    4591 main.go:141] libmachine: STDOUT: 
	I1207 12:24:43.769900    4591 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:24:43.769912    4591 client.go:171] LocalClient.Create took 272.337583ms
	I1207 12:24:45.772051    4591 start.go:128] duration metric: createHost completed in 2.335239792s
	I1207 12:24:45.772108    4591 start.go:83] releasing machines lock for "enable-default-cni-676000", held for 2.335723958s
	W1207 12:24:45.772498    4591 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:45.782278    4591 out.go:177] 
	W1207 12:24:45.789528    4591 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:24:45.789566    4591 out.go:239] * 
	* 
	W1207 12:24:45.792085    4591 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:24:45.801298    4591 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.84s)

TestNetworkPlugins/group/bridge/Start (9.82s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.815947333s)

-- stdout --
	* [bridge-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-676000 in cluster bridge-676000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:24:48.081432    4708 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:24:48.081592    4708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:24:48.081594    4708 out.go:309] Setting ErrFile to fd 2...
	I1207 12:24:48.081597    4708 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:24:48.081737    4708 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:24:48.082726    4708 out.go:303] Setting JSON to false
	I1207 12:24:48.098694    4708 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3259,"bootTime":1701977429,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:24:48.098788    4708 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:24:48.104906    4708 out.go:177] * [bridge-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:24:48.116808    4708 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:24:48.112905    4708 notify.go:220] Checking for updates...
	I1207 12:24:48.122841    4708 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:24:48.126854    4708 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:24:48.129860    4708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:24:48.132862    4708 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:24:48.135854    4708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:24:48.139227    4708 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:24:48.139273    4708 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:24:48.143808    4708 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:24:48.150881    4708 start.go:298] selected driver: qemu2
	I1207 12:24:48.150890    4708 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:24:48.150897    4708 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:24:48.153277    4708 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:24:48.155857    4708 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:24:48.158920    4708 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:24:48.158950    4708 cni.go:84] Creating CNI manager for "bridge"
	I1207 12:24:48.158954    4708 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:24:48.158960    4708 start_flags.go:323] config:
	{Name:bridge-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:24:48.163524    4708 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:24:48.170867    4708 out.go:177] * Starting control plane node bridge-676000 in cluster bridge-676000
	I1207 12:24:48.174872    4708 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:24:48.174888    4708 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:24:48.174899    4708 cache.go:56] Caching tarball of preloaded images
	I1207 12:24:48.174959    4708 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:24:48.174966    4708 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:24:48.175041    4708 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/bridge-676000/config.json ...
	I1207 12:24:48.175058    4708 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/bridge-676000/config.json: {Name:mk6f5c4d2e1d50a6275ca7f332a0279e8de4ec64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:24:48.175266    4708 start.go:365] acquiring machines lock for bridge-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:24:48.175298    4708 start.go:369] acquired machines lock for "bridge-676000" in 26.208µs
	I1207 12:24:48.175309    4708 start.go:93] Provisioning new machine with config: &{Name:bridge-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:24:48.175337    4708 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:24:48.183870    4708 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:24:48.201979    4708 start.go:159] libmachine.API.Create for "bridge-676000" (driver="qemu2")
	I1207 12:24:48.202012    4708 client.go:168] LocalClient.Create starting
	I1207 12:24:48.202084    4708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:24:48.202115    4708 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:48.202126    4708 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:48.202164    4708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:24:48.202186    4708 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:48.202193    4708 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:48.202606    4708 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:24:48.331000    4708 main.go:141] libmachine: Creating SSH key...
	I1207 12:24:48.432856    4708 main.go:141] libmachine: Creating Disk image...
	I1207 12:24:48.432862    4708 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:24:48.433066    4708 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2
	I1207 12:24:48.445395    4708 main.go:141] libmachine: STDOUT: 
	I1207 12:24:48.445413    4708 main.go:141] libmachine: STDERR: 
	I1207 12:24:48.445468    4708 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2 +20000M
	I1207 12:24:48.455950    4708 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:24:48.455965    4708 main.go:141] libmachine: STDERR: 
	I1207 12:24:48.455982    4708 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2
	I1207 12:24:48.455988    4708 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:24:48.456025    4708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:1c:2c:5c:3c:71 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2
	I1207 12:24:48.457681    4708 main.go:141] libmachine: STDOUT: 
	I1207 12:24:48.457696    4708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:24:48.457716    4708 client.go:171] LocalClient.Create took 255.701792ms
	I1207 12:24:50.459857    4708 start.go:128] duration metric: createHost completed in 2.284540375s
	I1207 12:24:50.459908    4708 start.go:83] releasing machines lock for "bridge-676000", held for 2.284643625s
	W1207 12:24:50.459976    4708 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:50.467737    4708 out.go:177] * Deleting "bridge-676000" in qemu2 ...
	W1207 12:24:50.493786    4708 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:50.493873    4708 start.go:709] Will try again in 5 seconds ...
	I1207 12:24:55.495990    4708 start.go:365] acquiring machines lock for bridge-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:24:55.496489    4708 start.go:369] acquired machines lock for "bridge-676000" in 366.25µs
	I1207 12:24:55.496647    4708 start.go:93] Provisioning new machine with config: &{Name:bridge-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:24:55.496899    4708 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:24:55.507485    4708 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:24:55.556858    4708 start.go:159] libmachine.API.Create for "bridge-676000" (driver="qemu2")
	I1207 12:24:55.556919    4708 client.go:168] LocalClient.Create starting
	I1207 12:24:55.557072    4708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:24:55.557156    4708 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:55.557176    4708 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:55.557271    4708 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:24:55.557331    4708 main.go:141] libmachine: Decoding PEM data...
	I1207 12:24:55.557346    4708 main.go:141] libmachine: Parsing certificate...
	I1207 12:24:55.557960    4708 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:24:55.695920    4708 main.go:141] libmachine: Creating SSH key...
	I1207 12:24:55.796582    4708 main.go:141] libmachine: Creating Disk image...
	I1207 12:24:55.796591    4708 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:24:55.796789    4708 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2
	I1207 12:24:55.809015    4708 main.go:141] libmachine: STDOUT: 
	I1207 12:24:55.809042    4708 main.go:141] libmachine: STDERR: 
	I1207 12:24:55.809100    4708 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2 +20000M
	I1207 12:24:55.819489    4708 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:24:55.819513    4708 main.go:141] libmachine: STDERR: 
	I1207 12:24:55.819528    4708 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2
	I1207 12:24:55.819540    4708 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:24:55.819581    4708 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:e6:f9:d7:fa:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/bridge-676000/disk.qcow2
	I1207 12:24:55.821352    4708 main.go:141] libmachine: STDOUT: 
	I1207 12:24:55.821368    4708 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:24:55.821381    4708 client.go:171] LocalClient.Create took 264.461958ms
	I1207 12:24:57.823513    4708 start.go:128] duration metric: createHost completed in 2.326614s
	I1207 12:24:57.823567    4708 start.go:83] releasing machines lock for "bridge-676000", held for 2.327078208s
	W1207 12:24:57.823995    4708 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:24:57.836558    4708 out.go:177] 
	W1207 12:24:57.839615    4708 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:24:57.839673    4708 out.go:239] * 
	* 
	W1207 12:24:57.842427    4708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:24:57.853598    4708 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.82s)

TestNetworkPlugins/group/kubenet/Start (9.8s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.795803375s)

-- stdout --
	* [kubenet-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-676000 in cluster kubenet-676000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:25:00.127151    4825 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:25:00.127308    4825 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:00.127311    4825 out.go:309] Setting ErrFile to fd 2...
	I1207 12:25:00.127313    4825 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:00.127432    4825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:25:00.128433    4825 out.go:303] Setting JSON to false
	I1207 12:25:00.144343    4825 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3271,"bootTime":1701977429,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:25:00.144412    4825 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:25:00.151112    4825 out.go:177] * [kubenet-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:25:00.160045    4825 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:25:00.164041    4825 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:25:00.160074    4825 notify.go:220] Checking for updates...
	I1207 12:25:00.165510    4825 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:25:00.169042    4825 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:25:00.172094    4825 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:25:00.175088    4825 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:25:00.178327    4825 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:25:00.178375    4825 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:25:00.183063    4825 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:25:00.190041    4825 start.go:298] selected driver: qemu2
	I1207 12:25:00.190050    4825 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:25:00.190058    4825 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:25:00.192384    4825 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:25:00.196071    4825 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:25:00.199172    4825 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:25:00.199229    4825 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1207 12:25:00.199235    4825 start_flags.go:323] config:
	{Name:kubenet-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:25:00.203641    4825 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:00.211126    4825 out.go:177] * Starting control plane node kubenet-676000 in cluster kubenet-676000
	I1207 12:25:00.214043    4825 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:25:00.214060    4825 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:25:00.214071    4825 cache.go:56] Caching tarball of preloaded images
	I1207 12:25:00.214137    4825 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:25:00.214144    4825 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:25:00.214231    4825 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/kubenet-676000/config.json ...
	I1207 12:25:00.214242    4825 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/kubenet-676000/config.json: {Name:mkfa7f9fb013951a84db62b7085c5e9c6912d64d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:25:00.214460    4825 start.go:365] acquiring machines lock for kubenet-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:00.214493    4825 start.go:369] acquired machines lock for "kubenet-676000" in 26.666µs
	I1207 12:25:00.214505    4825 start.go:93] Provisioning new machine with config: &{Name:kubenet-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:00.214537    4825 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:00.222031    4825 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:25:00.239304    4825 start.go:159] libmachine.API.Create for "kubenet-676000" (driver="qemu2")
	I1207 12:25:00.239341    4825 client.go:168] LocalClient.Create starting
	I1207 12:25:00.239415    4825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:00.239445    4825 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:00.239458    4825 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:00.239499    4825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:00.239525    4825 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:00.239534    4825 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:00.239917    4825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:00.370719    4825 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:00.504895    4825 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:00.504902    4825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:00.505075    4825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2
	I1207 12:25:00.517376    4825 main.go:141] libmachine: STDOUT: 
	I1207 12:25:00.517409    4825 main.go:141] libmachine: STDERR: 
	I1207 12:25:00.517468    4825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2 +20000M
	I1207 12:25:00.527795    4825 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:00.527809    4825 main.go:141] libmachine: STDERR: 
	I1207 12:25:00.527839    4825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2
	I1207 12:25:00.527845    4825 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:00.527879    4825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:13:4d:d5:08:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2
	I1207 12:25:00.529523    4825 main.go:141] libmachine: STDOUT: 
	I1207 12:25:00.529537    4825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:00.529558    4825 client.go:171] LocalClient.Create took 290.214375ms
	I1207 12:25:02.531783    4825 start.go:128] duration metric: createHost completed in 2.317272083s
	I1207 12:25:02.531834    4825 start.go:83] releasing machines lock for "kubenet-676000", held for 2.317374875s
	W1207 12:25:02.531906    4825 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:02.541853    4825 out.go:177] * Deleting "kubenet-676000" in qemu2 ...
	W1207 12:25:02.566549    4825 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:02.566574    4825 start.go:709] Will try again in 5 seconds ...
	I1207 12:25:07.568692    4825 start.go:365] acquiring machines lock for kubenet-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:07.569076    4825 start.go:369] acquired machines lock for "kubenet-676000" in 301.083µs
	I1207 12:25:07.569197    4825 start.go:93] Provisioning new machine with config: &{Name:kubenet-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:07.569517    4825 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:07.578151    4825 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:25:07.625719    4825 start.go:159] libmachine.API.Create for "kubenet-676000" (driver="qemu2")
	I1207 12:25:07.625771    4825 client.go:168] LocalClient.Create starting
	I1207 12:25:07.625885    4825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:07.625947    4825 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:07.625964    4825 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:07.626032    4825 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:07.626072    4825 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:07.626086    4825 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:07.626540    4825 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:07.765373    4825 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:07.824886    4825 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:07.824895    4825 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:07.825110    4825 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2
	I1207 12:25:07.837168    4825 main.go:141] libmachine: STDOUT: 
	I1207 12:25:07.837191    4825 main.go:141] libmachine: STDERR: 
	I1207 12:25:07.837253    4825 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2 +20000M
	I1207 12:25:07.847863    4825 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:07.847896    4825 main.go:141] libmachine: STDERR: 
	I1207 12:25:07.847917    4825 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2
	I1207 12:25:07.847927    4825 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:07.847980    4825 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:47:0f:99:9a:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/kubenet-676000/disk.qcow2
	I1207 12:25:07.849656    4825 main.go:141] libmachine: STDOUT: 
	I1207 12:25:07.849684    4825 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:07.849698    4825 client.go:171] LocalClient.Create took 223.923583ms
	I1207 12:25:09.851952    4825 start.go:128] duration metric: createHost completed in 2.282401959s
	I1207 12:25:09.852017    4825 start.go:83] releasing machines lock for "kubenet-676000", held for 2.28295975s
	W1207 12:25:09.852450    4825 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:09.860890    4825 out.go:177] 
	W1207 12:25:09.866010    4825 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:25:09.866065    4825 out.go:239] * 
	* 
	W1207 12:25:09.868657    4825 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:25:09.877970    4825 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.80s)
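Both create attempts above fail the same way: `socket_vmnet_client` cannot reach the Unix socket at `/var/run/socket_vmnet` ("Connection refused"), so the QEMU VM is never started and the test exits with status 80. A minimal shell sketch to confirm whether the daemon's socket is present on the build agent (the socket path is taken from the `SocketVMnetPath` in the log above; everything else here is an assumption, not something the log shows):

```shell
#!/bin/sh
# Check whether the socket_vmnet daemon's Unix socket exists.
# SOCK matches the SocketVMnetPath used by the failing runs above.
SOCK=/var/run/socket_vmnet

if [ -S "$SOCK" ]; then
  echo "socket present: $SOCK"
else
  echo "socket missing: $SOCK -- socket_vmnet daemon is likely not running"
fi
```

If the socket is missing, restarting the daemon should clear the "Connection refused" errors; on a Homebrew install that is typically `sudo brew services start socket_vmnet` (per the socket_vmnet README, assumed here rather than shown in the log).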

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.930078209s)

                                                
                                                
-- stdout --
	* [custom-flannel-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-676000 in cluster custom-flannel-676000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1207 12:25:12.157735    4939 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:25:12.157885    4939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:12.157888    4939 out.go:309] Setting ErrFile to fd 2...
	I1207 12:25:12.157891    4939 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:12.158007    4939 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:25:12.159060    4939 out.go:303] Setting JSON to false
	I1207 12:25:12.174825    4939 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3283,"bootTime":1701977429,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:25:12.174924    4939 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:25:12.181090    4939 out.go:177] * [custom-flannel-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:25:12.188976    4939 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:25:12.193102    4939 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:25:12.189038    4939 notify.go:220] Checking for updates...
	I1207 12:25:12.197654    4939 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:25:12.201107    4939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:25:12.204124    4939 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:25:12.207132    4939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:25:12.210512    4939 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:25:12.210548    4939 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:25:12.215109    4939 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:25:12.222011    4939 start.go:298] selected driver: qemu2
	I1207 12:25:12.222018    4939 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:25:12.222027    4939 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:25:12.224426    4939 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:25:12.227132    4939 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:25:12.230199    4939 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:25:12.230238    4939 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1207 12:25:12.230246    4939 start_flags.go:318] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1207 12:25:12.230251    4939 start_flags.go:323] config:
	{Name:custom-flannel-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/sock
et_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:25:12.234715    4939 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:12.242126    4939 out.go:177] * Starting control plane node custom-flannel-676000 in cluster custom-flannel-676000
	I1207 12:25:12.245102    4939 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:25:12.245117    4939 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:25:12.245126    4939 cache.go:56] Caching tarball of preloaded images
	I1207 12:25:12.245185    4939 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:25:12.245200    4939 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:25:12.245263    4939 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/custom-flannel-676000/config.json ...
	I1207 12:25:12.245275    4939 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/custom-flannel-676000/config.json: {Name:mkb0e1d2487b9b5ae6f14c0bec8237a4bfc99f00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:25:12.245493    4939 start.go:365] acquiring machines lock for custom-flannel-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:12.245526    4939 start.go:369] acquired machines lock for "custom-flannel-676000" in 26.958µs
	I1207 12:25:12.245538    4939 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p200
0.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:12.245568    4939 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:12.252965    4939 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:25:12.269556    4939 start.go:159] libmachine.API.Create for "custom-flannel-676000" (driver="qemu2")
	I1207 12:25:12.269586    4939 client.go:168] LocalClient.Create starting
	I1207 12:25:12.269643    4939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:12.269677    4939 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:12.269689    4939 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:12.269726    4939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:12.269748    4939 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:12.269755    4939 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:12.270126    4939 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:12.394830    4939 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:12.536895    4939 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:12.536906    4939 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:12.537109    4939 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2
	I1207 12:25:12.549717    4939 main.go:141] libmachine: STDOUT: 
	I1207 12:25:12.549740    4939 main.go:141] libmachine: STDERR: 
	I1207 12:25:12.549795    4939 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2 +20000M
	I1207 12:25:12.560080    4939 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:12.560094    4939 main.go:141] libmachine: STDERR: 
	I1207 12:25:12.560111    4939 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2
	I1207 12:25:12.560116    4939 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:12.560158    4939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:fc:f4:62:79:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2
	I1207 12:25:12.561813    4939 main.go:141] libmachine: STDOUT: 
	I1207 12:25:12.561830    4939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:12.561852    4939 client.go:171] LocalClient.Create took 292.2645ms
	I1207 12:25:14.563993    4939 start.go:128] duration metric: createHost completed in 2.318446333s
	I1207 12:25:14.564031    4939 start.go:83] releasing machines lock for "custom-flannel-676000", held for 2.318539209s
	W1207 12:25:14.564106    4939 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:14.574249    4939 out.go:177] * Deleting "custom-flannel-676000" in qemu2 ...
	W1207 12:25:14.599376    4939 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:14.599413    4939 start.go:709] Will try again in 5 seconds ...
	I1207 12:25:19.601736    4939 start.go:365] acquiring machines lock for custom-flannel-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:19.602230    4939 start.go:369] acquired machines lock for "custom-flannel-676000" in 375.792µs
	I1207 12:25:19.602344    4939 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:19.602629    4939 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:19.615063    4939 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:25:19.664507    4939 start.go:159] libmachine.API.Create for "custom-flannel-676000" (driver="qemu2")
	I1207 12:25:19.664560    4939 client.go:168] LocalClient.Create starting
	I1207 12:25:19.664684    4939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:19.664753    4939 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:19.664772    4939 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:19.664837    4939 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:19.664879    4939 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:19.664892    4939 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:19.665377    4939 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:19.803110    4939 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:19.984720    4939 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:19.984733    4939 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:19.984933    4939 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2
	I1207 12:25:19.997460    4939 main.go:141] libmachine: STDOUT: 
	I1207 12:25:19.997478    4939 main.go:141] libmachine: STDERR: 
	I1207 12:25:19.997556    4939 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2 +20000M
	I1207 12:25:20.007953    4939 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:20.007991    4939 main.go:141] libmachine: STDERR: 
	I1207 12:25:20.008010    4939 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2
	I1207 12:25:20.008018    4939 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:20.008058    4939 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:4c:8f:f0:33:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/custom-flannel-676000/disk.qcow2
	I1207 12:25:20.009756    4939 main.go:141] libmachine: STDOUT: 
	I1207 12:25:20.009777    4939 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:20.009792    4939 client.go:171] LocalClient.Create took 345.233958ms
	I1207 12:25:22.011965    4939 start.go:128] duration metric: createHost completed in 2.40932425s
	I1207 12:25:22.012031    4939 start.go:83] releasing machines lock for "custom-flannel-676000", held for 2.40982325s
	W1207 12:25:22.012519    4939 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:22.025150    4939 out.go:177] 
	W1207 12:25:22.029143    4939 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:25:22.029184    4939 out.go:239] * 
	* 
	W1207 12:25:22.031744    4939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:25:22.042136    4939 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.93s)
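Note: every create attempt above dies on the same error, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet helper is not accepting connections at the path the qemu2 driver uses. A minimal pre-flight check one could run on the affected agent before the test suite (a sketch only; it assumes the `/var/run/socket_vmnet` path taken from the logs above, not any minikube-provided tooling) would be:

```shell
# Report whether a socket_vmnet unix socket exists at the path minikube uses.
# If it is missing, socket_vmnet is not running and every qemu2 start will
# fail exactly as in the logs above.
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
    echo "socket present: $SOCK"
else
    echo "socket missing: $SOCK - start socket_vmnet before re-running"
fi
```

A `-S` test only proves the socket file exists; a daemon that crashed without cleaning up would still leave a stale socket, so a refused connection on an existing socket points at a dead or wedged socket_vmnet process.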

TestNetworkPlugins/group/calico/Start (9.8s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
E1207 12:25:32.459575    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.793175208s)

-- stdout --
	* [calico-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-676000 in cluster calico-676000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:25:24.499428    5075 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:25:24.499591    5075 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:24.499607    5075 out.go:309] Setting ErrFile to fd 2...
	I1207 12:25:24.499613    5075 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:24.499748    5075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:25:24.503361    5075 out.go:303] Setting JSON to false
	I1207 12:25:24.519304    5075 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3295,"bootTime":1701977429,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:25:24.519369    5075 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:25:24.523900    5075 out.go:177] * [calico-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:25:24.531869    5075 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:25:24.531922    5075 notify.go:220] Checking for updates...
	I1207 12:25:24.534971    5075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:25:24.538984    5075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:25:24.542890    5075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:25:24.545954    5075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:25:24.548985    5075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:25:24.552285    5075 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:25:24.552331    5075 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:25:24.556971    5075 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:25:24.563966    5075 start.go:298] selected driver: qemu2
	I1207 12:25:24.563974    5075 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:25:24.563980    5075 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:25:24.566320    5075 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:25:24.568971    5075 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:25:24.572056    5075 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:25:24.572092    5075 cni.go:84] Creating CNI manager for "calico"
	I1207 12:25:24.572096    5075 start_flags.go:318] Found "Calico" CNI - setting NetworkPlugin=cni
	I1207 12:25:24.572103    5075 start_flags.go:323] config:
	{Name:calico-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:25:24.576658    5075 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:24.583946    5075 out.go:177] * Starting control plane node calico-676000 in cluster calico-676000
	I1207 12:25:24.586824    5075 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:25:24.586837    5075 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:25:24.586846    5075 cache.go:56] Caching tarball of preloaded images
	I1207 12:25:24.586901    5075 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:25:24.586906    5075 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:25:24.586959    5075 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/calico-676000/config.json ...
	I1207 12:25:24.586970    5075 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/calico-676000/config.json: {Name:mkca9aee88f77a606b05b1c17ed89d07cc0ea252 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:25:24.587171    5075 start.go:365] acquiring machines lock for calico-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:24.587201    5075 start.go:369] acquired machines lock for "calico-676000" in 24.667µs
	I1207 12:25:24.587213    5075 start.go:93] Provisioning new machine with config: &{Name:calico-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:24.587253    5075 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:24.593936    5075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:25:24.610850    5075 start.go:159] libmachine.API.Create for "calico-676000" (driver="qemu2")
	I1207 12:25:24.610886    5075 client.go:168] LocalClient.Create starting
	I1207 12:25:24.610942    5075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:24.610973    5075 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:24.610981    5075 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:24.611021    5075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:24.611048    5075 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:24.611062    5075 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:24.611404    5075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:24.736803    5075 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:24.798924    5075 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:24.798930    5075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:24.799097    5075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2
	I1207 12:25:24.811113    5075 main.go:141] libmachine: STDOUT: 
	I1207 12:25:24.811132    5075 main.go:141] libmachine: STDERR: 
	I1207 12:25:24.811209    5075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2 +20000M
	I1207 12:25:24.822100    5075 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:24.822116    5075 main.go:141] libmachine: STDERR: 
	I1207 12:25:24.822140    5075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2
	I1207 12:25:24.822145    5075 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:24.822176    5075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:4d:36:b2:ad:67 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2
	I1207 12:25:24.823865    5075 main.go:141] libmachine: STDOUT: 
	I1207 12:25:24.823882    5075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:24.823903    5075 client.go:171] LocalClient.Create took 213.013834ms
	I1207 12:25:26.826076    5075 start.go:128] duration metric: createHost completed in 2.238839458s
	I1207 12:25:26.826156    5075 start.go:83] releasing machines lock for "calico-676000", held for 2.238983875s
	W1207 12:25:26.826244    5075 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:26.841501    5075 out.go:177] * Deleting "calico-676000" in qemu2 ...
	W1207 12:25:26.864512    5075 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:26.864541    5075 start.go:709] Will try again in 5 seconds ...
	I1207 12:25:31.866674    5075 start.go:365] acquiring machines lock for calico-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:31.867079    5075 start.go:369] acquired machines lock for "calico-676000" in 289.083µs
	I1207 12:25:31.867192    5075 start.go:93] Provisioning new machine with config: &{Name:calico-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:31.867503    5075 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:31.877161    5075 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:25:31.927197    5075 start.go:159] libmachine.API.Create for "calico-676000" (driver="qemu2")
	I1207 12:25:31.927257    5075 client.go:168] LocalClient.Create starting
	I1207 12:25:31.927382    5075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:31.927438    5075 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:31.927454    5075 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:31.927520    5075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:31.927561    5075 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:31.927573    5075 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:31.928093    5075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:32.065739    5075 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:32.187214    5075 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:32.187223    5075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:32.187424    5075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2
	I1207 12:25:32.199501    5075 main.go:141] libmachine: STDOUT: 
	I1207 12:25:32.199562    5075 main.go:141] libmachine: STDERR: 
	I1207 12:25:32.199626    5075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2 +20000M
	I1207 12:25:32.210310    5075 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:32.210335    5075 main.go:141] libmachine: STDERR: 
	I1207 12:25:32.210353    5075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2
	I1207 12:25:32.210364    5075 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:32.210410    5075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:ea:aa:68:e1:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/calico-676000/disk.qcow2
	I1207 12:25:32.212147    5075 main.go:141] libmachine: STDOUT: 
	I1207 12:25:32.212162    5075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:32.212176    5075 client.go:171] LocalClient.Create took 284.919125ms
	I1207 12:25:34.214310    5075 start.go:128] duration metric: createHost completed in 2.346816666s
	I1207 12:25:34.214378    5075 start.go:83] releasing machines lock for "calico-676000", held for 2.347318291s
	W1207 12:25:34.214769    5075 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:34.224383    5075 out.go:177] 
	W1207 12:25:34.231484    5075 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:25:34.231555    5075 out.go:239] * 
	* 
	W1207 12:25:34.234509    5075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:25:34.245186    5075 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.80s)
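Every failure in this group reports the same root cause: nothing is listening on `/var/run/socket_vmnet`, so `socket_vmnet_client` cannot hand QEMU a network file descriptor. A minimal pre-flight check is sketched below; the path matches the `SocketVMnetPath` in the captured config, while the `brew services` restart hint assumes the default Homebrew `socket_vmnet` install and is a suggestion, not part of the captured log:

```shell
#!/bin/sh
# Probe for the socket_vmnet daemon socket that the qemu2 driver expects.
# SOCKET matches SocketVMnetPath from the minikube config dumped above.
SOCKET=/var/run/socket_vmnet

if [ -S "$SOCKET" ]; then
  echo "socket present: $SOCKET"
else
  echo "socket missing: $SOCKET"
  # Assumed default Homebrew installation of socket_vmnet:
  echo "try: sudo brew services start socket_vmnet"
fi
```

If the socket is absent, restarting the daemon before rerunning the suite should clear this entire group of `GUEST_PROVISION` failures rather than any single test.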
TestNetworkPlugins/group/false/Start (9.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-676000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.776912917s)

-- stdout --
	* [false-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-676000 in cluster false-676000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-676000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:25:36.715664    5199 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:25:36.715819    5199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:36.715822    5199 out.go:309] Setting ErrFile to fd 2...
	I1207 12:25:36.715824    5199 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:36.715966    5199 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:25:36.717001    5199 out.go:303] Setting JSON to false
	I1207 12:25:36.732896    5199 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3307,"bootTime":1701977429,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:25:36.732992    5199 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:25:36.738845    5199 out.go:177] * [false-676000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:25:36.746767    5199 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:25:36.746813    5199 notify.go:220] Checking for updates...
	I1207 12:25:36.751832    5199 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:25:36.754821    5199 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:25:36.757783    5199 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:25:36.760811    5199 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:25:36.763780    5199 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:25:36.767134    5199 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:25:36.767182    5199 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:25:36.771730    5199 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:25:36.778691    5199 start.go:298] selected driver: qemu2
	I1207 12:25:36.778697    5199 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:25:36.778702    5199 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:25:36.781109    5199 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:25:36.784841    5199 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:25:36.787917    5199 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:25:36.787981    5199 cni.go:84] Creating CNI manager for "false"
	I1207 12:25:36.787986    5199 start_flags.go:323] config:
	{Name:false-676000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPI
D:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:25:36.792542    5199 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:36.799771    5199 out.go:177] * Starting control plane node false-676000 in cluster false-676000
	I1207 12:25:36.803542    5199 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:25:36.803555    5199 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:25:36.803565    5199 cache.go:56] Caching tarball of preloaded images
	I1207 12:25:36.803620    5199 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:25:36.803625    5199 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:25:36.803693    5199 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/false-676000/config.json ...
	I1207 12:25:36.803707    5199 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/false-676000/config.json: {Name:mk3bdc70281dd81707ce00389bf1174cbe28d1ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:25:36.803919    5199 start.go:365] acquiring machines lock for false-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:36.803952    5199 start.go:369] acquired machines lock for "false-676000" in 27.458µs
	I1207 12:25:36.803963    5199 start.go:93] Provisioning new machine with config: &{Name:false-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:false-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:36.803997    5199 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:36.811625    5199 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:25:36.828527    5199 start.go:159] libmachine.API.Create for "false-676000" (driver="qemu2")
	I1207 12:25:36.828555    5199 client.go:168] LocalClient.Create starting
	I1207 12:25:36.828619    5199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:36.828650    5199 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:36.828665    5199 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:36.828705    5199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:36.828727    5199 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:36.828735    5199 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:36.829073    5199 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:36.954716    5199 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:37.030688    5199 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:37.030695    5199 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:37.030871    5199 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2
	I1207 12:25:37.043130    5199 main.go:141] libmachine: STDOUT: 
	I1207 12:25:37.043149    5199 main.go:141] libmachine: STDERR: 
	I1207 12:25:37.043211    5199 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2 +20000M
	I1207 12:25:37.053656    5199 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:37.053671    5199 main.go:141] libmachine: STDERR: 
	I1207 12:25:37.053689    5199 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2
	I1207 12:25:37.053693    5199 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:37.053726    5199 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:63:08:26:1f:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2
	I1207 12:25:37.055507    5199 main.go:141] libmachine: STDOUT: 
	I1207 12:25:37.055522    5199 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:37.055548    5199 client.go:171] LocalClient.Create took 226.9825ms
	I1207 12:25:39.057709    5199 start.go:128] duration metric: createHost completed in 2.253730083s
	I1207 12:25:39.057779    5199 start.go:83] releasing machines lock for "false-676000", held for 2.253858875s
	W1207 12:25:39.057898    5199 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:39.071118    5199 out.go:177] * Deleting "false-676000" in qemu2 ...
	W1207 12:25:39.094286    5199 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:39.094319    5199 start.go:709] Will try again in 5 seconds ...
	I1207 12:25:44.095731    5199 start.go:365] acquiring machines lock for false-676000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:44.096218    5199 start.go:369] acquired machines lock for "false-676000" in 359.958µs
	I1207 12:25:44.096399    5199 start.go:93] Provisioning new machine with config: &{Name:false-676000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:false-676000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:44.096667    5199 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:44.105401    5199 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1207 12:25:44.155219    5199 start.go:159] libmachine.API.Create for "false-676000" (driver="qemu2")
	I1207 12:25:44.155269    5199 client.go:168] LocalClient.Create starting
	I1207 12:25:44.155406    5199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:44.155481    5199 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:44.155506    5199 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:44.155580    5199 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:44.155640    5199 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:44.155665    5199 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:44.156209    5199 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:44.293664    5199 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:44.385793    5199 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:44.385799    5199 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:44.385980    5199 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2
	I1207 12:25:44.397934    5199 main.go:141] libmachine: STDOUT: 
	I1207 12:25:44.397955    5199 main.go:141] libmachine: STDERR: 
	I1207 12:25:44.398015    5199 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2 +20000M
	I1207 12:25:44.408385    5199 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:44.408399    5199 main.go:141] libmachine: STDERR: 
	I1207 12:25:44.408415    5199 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2
	I1207 12:25:44.408421    5199 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:44.408473    5199 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:13:5c:f3:57:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/false-676000/disk.qcow2
	I1207 12:25:44.410121    5199 main.go:141] libmachine: STDOUT: 
	I1207 12:25:44.410141    5199 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:44.410154    5199 client.go:171] LocalClient.Create took 254.88475ms
	I1207 12:25:46.412433    5199 start.go:128] duration metric: createHost completed in 2.315665208s
	I1207 12:25:46.412498    5199 start.go:83] releasing machines lock for "false-676000", held for 2.316297875s
	W1207 12:25:46.412905    5199 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-676000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:46.425512    5199 out.go:177] 
	W1207 12:25:46.429651    5199 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:25:46.429680    5199 out.go:239] * 
	* 
	W1207 12:25:46.438824    5199 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:25:46.446487    5199 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.78s)
TestStartStop/group/old-k8s-version/serial/FirstStart (9.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-643000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-643000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (9.881628208s)

-- stdout --
	* [old-k8s-version-643000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-643000 in cluster old-k8s-version-643000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-643000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:25:48.715342    5314 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:25:48.715502    5314 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:48.715505    5314 out.go:309] Setting ErrFile to fd 2...
	I1207 12:25:48.715508    5314 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:48.715640    5314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:25:48.716665    5314 out.go:303] Setting JSON to false
	I1207 12:25:48.732421    5314 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3319,"bootTime":1701977429,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:25:48.732508    5314 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:25:48.738688    5314 out.go:177] * [old-k8s-version-643000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:25:48.745647    5314 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:25:48.750647    5314 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:25:48.745716    5314 notify.go:220] Checking for updates...
	I1207 12:25:48.756582    5314 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:25:48.759636    5314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:25:48.762524    5314 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:25:48.765639    5314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:25:48.768955    5314 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:25:48.768996    5314 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:25:48.772596    5314 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:25:48.779619    5314 start.go:298] selected driver: qemu2
	I1207 12:25:48.779625    5314 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:25:48.779633    5314 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:25:48.781951    5314 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:25:48.783189    5314 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:25:48.785709    5314 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:25:48.785763    5314 cni.go:84] Creating CNI manager for ""
	I1207 12:25:48.785769    5314 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 12:25:48.785776    5314 start_flags.go:323] config:
	{Name:old-k8s-version-643000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-643000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:25:48.790302    5314 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:48.797511    5314 out.go:177] * Starting control plane node old-k8s-version-643000 in cluster old-k8s-version-643000
	I1207 12:25:48.801662    5314 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 12:25:48.801680    5314 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1207 12:25:48.801689    5314 cache.go:56] Caching tarball of preloaded images
	I1207 12:25:48.801749    5314 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:25:48.801754    5314 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1207 12:25:48.801833    5314 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/old-k8s-version-643000/config.json ...
	I1207 12:25:48.801843    5314 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/old-k8s-version-643000/config.json: {Name:mk0b8db6a2ea972652a8186038fc7e26447d0efc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:25:48.802050    5314 start.go:365] acquiring machines lock for old-k8s-version-643000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:48.802085    5314 start.go:369] acquired machines lock for "old-k8s-version-643000" in 27.584µs
	I1207 12:25:48.802098    5314 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-643000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-643000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:48.802139    5314 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:48.810585    5314 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:25:48.827338    5314 start.go:159] libmachine.API.Create for "old-k8s-version-643000" (driver="qemu2")
	I1207 12:25:48.827363    5314 client.go:168] LocalClient.Create starting
	I1207 12:25:48.827432    5314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:48.827461    5314 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:48.827471    5314 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:48.827513    5314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:48.827534    5314 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:48.827542    5314 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:48.827908    5314 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:48.955699    5314 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:49.176986    5314 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:49.176995    5314 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:49.177185    5314 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2
	I1207 12:25:49.189618    5314 main.go:141] libmachine: STDOUT: 
	I1207 12:25:49.189646    5314 main.go:141] libmachine: STDERR: 
	I1207 12:25:49.189698    5314 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2 +20000M
	I1207 12:25:49.200032    5314 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:49.200056    5314 main.go:141] libmachine: STDERR: 
	I1207 12:25:49.200077    5314 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2
	I1207 12:25:49.200082    5314 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:49.200119    5314 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:f2:99:1c:57:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2
	I1207 12:25:49.201791    5314 main.go:141] libmachine: STDOUT: 
	I1207 12:25:49.201810    5314 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:49.201825    5314 client.go:171] LocalClient.Create took 374.464917ms
	I1207 12:25:51.203963    5314 start.go:128] duration metric: createHost completed in 2.401850125s
	I1207 12:25:51.204017    5314 start.go:83] releasing machines lock for "old-k8s-version-643000", held for 2.401967292s
	W1207 12:25:51.204081    5314 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:51.210286    5314 out.go:177] * Deleting "old-k8s-version-643000" in qemu2 ...
	W1207 12:25:51.234372    5314 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:51.234400    5314 start.go:709] Will try again in 5 seconds ...
	I1207 12:25:56.236501    5314 start.go:365] acquiring machines lock for old-k8s-version-643000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:56.236814    5314 start.go:369] acquired machines lock for "old-k8s-version-643000" in 221.541µs
	I1207 12:25:56.236902    5314 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-643000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-643000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:56.237113    5314 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:56.246166    5314 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:25:56.288200    5314 start.go:159] libmachine.API.Create for "old-k8s-version-643000" (driver="qemu2")
	I1207 12:25:56.288238    5314 client.go:168] LocalClient.Create starting
	I1207 12:25:56.288399    5314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:56.288497    5314 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:56.288525    5314 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:56.288600    5314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:56.288647    5314 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:56.288662    5314 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:56.289185    5314 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:56.424492    5314 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:56.492195    5314 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:56.492202    5314 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:56.492379    5314 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2
	I1207 12:25:56.504782    5314 main.go:141] libmachine: STDOUT: 
	I1207 12:25:56.504801    5314 main.go:141] libmachine: STDERR: 
	I1207 12:25:56.504880    5314 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2 +20000M
	I1207 12:25:56.515814    5314 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:56.515854    5314 main.go:141] libmachine: STDERR: 
	I1207 12:25:56.515867    5314 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2
	I1207 12:25:56.515873    5314 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:56.515908    5314 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:44:f1:6b:f6:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2
	I1207 12:25:56.517621    5314 main.go:141] libmachine: STDOUT: 
	I1207 12:25:56.517638    5314 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:56.517651    5314 client.go:171] LocalClient.Create took 229.413708ms
	I1207 12:25:58.519821    5314 start.go:128] duration metric: createHost completed in 2.28272225s
	I1207 12:25:58.519877    5314 start.go:83] releasing machines lock for "old-k8s-version-643000", held for 2.28308275s
	W1207 12:25:58.520370    5314 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-643000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-643000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:58.534436    5314 out.go:177] 
	W1207 12:25:58.539504    5314 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:25:58.539546    5314 out.go:239] * 
	* 
	W1207 12:25:58.541881    5314 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:25:58.551351    5314 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-643000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000: exit status 7 (66.985041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-643000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.95s)
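Editor's note: every create attempt above dies on `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which is what a Unix-socket connect() returns when the socket path exists but no daemon is accepting on it, i.e. the socket_vmnet service on this agent was down or crashed. The following standalone sketch (throwaway temp path, not the real `/var/run/socket_vmnet`) reproduces that exact failure mode:

```python
import os
import socket
import tempfile

# Create a Unix socket file, then close the listener so the path
# remains on disk with nothing accepting connections -- the same
# condition QEMU hits when the socket_vmnet daemon is not running.
path = os.path.join(tempfile.mkdtemp(), "socket_vmnet")

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)   # creates the socket file at `path`
srv.listen(1)
srv.close()      # "daemon" exits; the socket file is left behind

cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    cli.connect(path)
    outcome = "connected"
except ConnectionRefusedError:
    outcome = "Connection refused"
finally:
    cli.close()

print(outcome)  # Connection refused
```

This suggests the fix is on the host (restart the socket_vmnet service) rather than in minikube itself, consistent with every qemu2 test in this run failing the same way.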

TestStoppedBinaryUpgrade/Upgrade (2.82s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2602786645.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2602786645.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2602786645.exe: permission denied (8.877834ms)
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2602786645.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2602786645.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2602786645.exe: permission denied (8.354917ms)
E1207 12:25:52.852764    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
version_upgrade_test.go:196: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2602786645.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:196: (dbg) Non-zero exit: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2602786645.exe start -p stopped-upgrade-867000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2602786645.exe: permission denied (7.82875ms)
version_upgrade_test.go:202: legacy v1.6.2 start failed: fork/exec /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.6.2.2602786645.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.82s)
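Editor's note: `fork/exec ...: permission denied` is the error exec() reports when the target file lacks the execute bit, which points at the cached legacy v1.6.2 binary never having been marked executable after download. A minimal sketch of the failure and the usual fix, using a throwaway script rather than the actual cached binary:

```python
import os
import stat
import subprocess
import tempfile

# mkstemp creates the file with mode 0600 -- no execute bit -- which
# mimics a downloaded binary that was never chmod +x'ed.
fd, path = tempfile.mkstemp(suffix=".sh")
with os.fdopen(fd, "w") as f:
    f.write("#!/bin/sh\necho ok\n")

try:
    subprocess.run([path], check=True)
    first = "ran"
except PermissionError:
    first = "permission denied"   # the same EACCES the test hit

# Adding the execute bit lets the same exec succeed.
os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
second = subprocess.check_output([path], text=True).strip()
os.remove(path)

print(first, "/", second)
```

If that diagnosis is right, retrying the exec (as the test does three times) can never help; the download/caching step needs to set the mode.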

TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-867000
version_upgrade_test.go:219: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-867000: exit status 85 (123.02775ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo cat                            | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo cat                            | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo cat                            | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo docker                         | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo cat                            | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo cat                            | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo cat                            | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo cat                            | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo                                | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo find                           | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p calico-676000 sudo crio                           | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p calico-676000                                     | calico-676000          | jenkins | v1.32.0 | 07 Dec 23 12:25 PST | 07 Dec 23 12:25 PST |
	| start   | -p false-676000 --memory=3072                        | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --wait-timeout=15m --cni=false                       |                        |         |         |                     |                     |
	|         | --driver=qemu2                                       |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo cat                             | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/nsswitch.conf                                   |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo cat                             | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/hosts                                           |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo cat                             | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/resolv.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo crictl                          | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | pods                                                 |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo crictl ps                       | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | --all                                                |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo find                            | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/cni -type f -exec sh -c                         |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo ip a s                          | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	| ssh     | -p false-676000 sudo ip r s                          | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	| ssh     | -p false-676000 sudo                                 | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | iptables-save                                        |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo iptables                        | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | -t nat -L -n -v                                      |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo systemctl                       | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | status kubelet --all --full                          |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo systemctl                       | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | cat kubelet --no-pager                               |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo                                 | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo cat                             | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo cat                             | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo systemctl                       | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | status docker --all --full                           |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo systemctl                       | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | cat docker --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo cat                             | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo docker                          | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo systemctl                       | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | status cri-docker --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo systemctl                       | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | cat cri-docker --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo cat                             | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo cat                             | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo                                 | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo systemctl                       | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | status containerd --all --full                       |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo systemctl                       | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | cat containerd --no-pager                            |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo cat                             | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo cat                             | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo                                 | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo systemctl                       | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | status crio --all --full                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo systemctl                       | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | cat crio --no-pager                                  |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo find                            | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p false-676000 sudo crio                            | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p false-676000                                      | false-676000           | jenkins | v1.32.0 | 07 Dec 23 12:25 PST | 07 Dec 23 12:25 PST |
	| start   | -p old-k8s-version-643000                            | old-k8s-version-643000 | jenkins | v1.32.0 | 07 Dec 23 12:25 PST |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=qemu2                                       |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 12:25:48
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 12:25:48.715342    5314 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:25:48.715502    5314 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:48.715505    5314 out.go:309] Setting ErrFile to fd 2...
	I1207 12:25:48.715508    5314 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:48.715640    5314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:25:48.716665    5314 out.go:303] Setting JSON to false
	I1207 12:25:48.732421    5314 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3319,"bootTime":1701977429,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:25:48.732508    5314 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:25:48.738688    5314 out.go:177] * [old-k8s-version-643000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:25:48.745647    5314 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:25:48.750647    5314 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:25:48.745716    5314 notify.go:220] Checking for updates...
	I1207 12:25:48.756582    5314 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:25:48.759636    5314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:25:48.762524    5314 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:25:48.765639    5314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:25:48.768955    5314 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:25:48.768996    5314 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:25:48.772596    5314 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:25:48.779619    5314 start.go:298] selected driver: qemu2
	I1207 12:25:48.779625    5314 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:25:48.779633    5314 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:25:48.781951    5314 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:25:48.783189    5314 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:25:48.785709    5314 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:25:48.785763    5314 cni.go:84] Creating CNI manager for ""
	I1207 12:25:48.785769    5314 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 12:25:48.785776    5314 start_flags.go:323] config:
	{Name:old-k8s-version-643000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-643000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:25:48.790302    5314 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:48.797511    5314 out.go:177] * Starting control plane node old-k8s-version-643000 in cluster old-k8s-version-643000
	I1207 12:25:48.801662    5314 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 12:25:48.801680    5314 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1207 12:25:48.801689    5314 cache.go:56] Caching tarball of preloaded images
	I1207 12:25:48.801749    5314 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:25:48.801754    5314 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1207 12:25:48.801833    5314 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/old-k8s-version-643000/config.json ...
	I1207 12:25:48.801843    5314 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/old-k8s-version-643000/config.json: {Name:mk0b8db6a2ea972652a8186038fc7e26447d0efc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:25:48.802050    5314 start.go:365] acquiring machines lock for old-k8s-version-643000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:48.802085    5314 start.go:369] acquired machines lock for "old-k8s-version-643000" in 27.584µs
	I1207 12:25:48.802098    5314 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-643000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-643000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:48.802139    5314 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:48.810585    5314 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:25:48.827338    5314 start.go:159] libmachine.API.Create for "old-k8s-version-643000" (driver="qemu2")
	I1207 12:25:48.827363    5314 client.go:168] LocalClient.Create starting
	I1207 12:25:48.827432    5314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:48.827461    5314 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:48.827471    5314 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:48.827513    5314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:48.827534    5314 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:48.827542    5314 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:48.827908    5314 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:48.955699    5314 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:49.176986    5314 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:49.176995    5314 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:49.177185    5314 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2
	I1207 12:25:49.189618    5314 main.go:141] libmachine: STDOUT: 
	I1207 12:25:49.189646    5314 main.go:141] libmachine: STDERR: 
	I1207 12:25:49.189698    5314 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2 +20000M
	I1207 12:25:49.200032    5314 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:49.200056    5314 main.go:141] libmachine: STDERR: 
	I1207 12:25:49.200077    5314 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2
	I1207 12:25:49.200082    5314 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:49.200119    5314 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:f2:99:1c:57:ba -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2
	I1207 12:25:49.201791    5314 main.go:141] libmachine: STDOUT: 
	I1207 12:25:49.201810    5314 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:49.201825    5314 client.go:171] LocalClient.Create took 374.464917ms
	I1207 12:25:51.203963    5314 start.go:128] duration metric: createHost completed in 2.401850125s
	I1207 12:25:51.204017    5314 start.go:83] releasing machines lock for "old-k8s-version-643000", held for 2.401967292s
	W1207 12:25:51.204081    5314 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:51.210286    5314 out.go:177] * Deleting "old-k8s-version-643000" in qemu2 ...
	
	* 
	* Profile "stopped-upgrade-867000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-867000"

-- /stdout --
version_upgrade_test.go:221: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (9.99s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1: exit status 80 (9.922244s)

-- stdout --
	* [no-preload-052000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-052000 in cluster no-preload-052000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-052000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:25:53.874113    5349 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:25:53.874267    5349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:53.874270    5349 out.go:309] Setting ErrFile to fd 2...
	I1207 12:25:53.874273    5349 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:53.874406    5349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:25:53.875427    5349 out.go:303] Setting JSON to false
	I1207 12:25:53.891471    5349 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3324,"bootTime":1701977429,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:25:53.891550    5349 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:25:53.896758    5349 out.go:177] * [no-preload-052000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:25:53.903889    5349 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:25:53.906872    5349 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:25:53.903944    5349 notify.go:220] Checking for updates...
	I1207 12:25:53.913867    5349 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:25:53.916868    5349 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:25:53.919859    5349 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:25:53.922898    5349 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:25:53.924764    5349 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:25:53.924836    5349 config.go:182] Loaded profile config "old-k8s-version-643000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1207 12:25:53.924875    5349 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:25:53.928821    5349 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:25:53.935667    5349 start.go:298] selected driver: qemu2
	I1207 12:25:53.935673    5349 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:25:53.935679    5349 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:25:53.938066    5349 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:25:53.940910    5349 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:25:53.944012    5349 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:25:53.944065    5349 cni.go:84] Creating CNI manager for ""
	I1207 12:25:53.944073    5349 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:25:53.944079    5349 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:25:53.944083    5349 start_flags.go:323] config:
	{Name:no-preload-052000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-052000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:25:53.948765    5349 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:53.955869    5349 out.go:177] * Starting control plane node no-preload-052000 in cluster no-preload-052000
	I1207 12:25:53.959938    5349 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 12:25:53.960041    5349 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/no-preload-052000/config.json ...
	I1207 12:25:53.960064    5349 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/no-preload-052000/config.json: {Name:mk408a386c64e45cb61a0a9315486664ac41ae68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:25:53.960064    5349 cache.go:107] acquiring lock: {Name:mke0e27a3799c58f785465e2d8474f5f8b54763f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:53.960064    5349 cache.go:107] acquiring lock: {Name:mkddf3dce2c990633eec184898b526fb432bbf7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:53.960089    5349 cache.go:107] acquiring lock: {Name:mk7d81fa01bcabe1d894043b5b1a6b542405f18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:53.960119    5349 cache.go:107] acquiring lock: {Name:mk4c96312c04bf74ef291a6af14b08d31e00d367 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:53.960133    5349 cache.go:115] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 12:25:53.960141    5349 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 79.416µs
	I1207 12:25:53.960153    5349 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 12:25:53.960213    5349 cache.go:107] acquiring lock: {Name:mk2606e51800209d8c53dcbbee1a143784c588f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:53.960246    5349 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 12:25:53.960258    5349 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 12:25:53.960269    5349 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 12:25:53.960248    5349 cache.go:107] acquiring lock: {Name:mk9b39d8c5199918cee88a11ece978918b16b169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:53.960297    5349 cache.go:107] acquiring lock: {Name:mk2ad558c49824d4eacc56c0ecedde1d9039ef99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:53.960311    5349 cache.go:107] acquiring lock: {Name:mk7b4c5206c6b667758c550921562cfd2e4e5378 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:53.960447    5349 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 12:25:53.960474    5349 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I1207 12:25:53.960489    5349 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I1207 12:25:53.960520    5349 start.go:365] acquiring machines lock for no-preload-052000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:53.960540    5349 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 12:25:53.960549    5349 start.go:369] acquired machines lock for "no-preload-052000" in 25µs
	I1207 12:25:53.960561    5349 start.go:93] Provisioning new machine with config: &{Name:no-preload-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-052000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:25:53.960588    5349 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:25:53.968738    5349 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:25:53.974295    5349 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.1
	I1207 12:25:53.974328    5349 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I1207 12:25:53.974379    5349 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.1
	I1207 12:25:53.974404    5349 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I1207 12:25:53.974931    5349 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I1207 12:25:53.974956    5349 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.1
	I1207 12:25:53.976446    5349 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.1
	I1207 12:25:53.985051    5349 start.go:159] libmachine.API.Create for "no-preload-052000" (driver="qemu2")
	I1207 12:25:53.985074    5349 client.go:168] LocalClient.Create starting
	I1207 12:25:53.985143    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:25:53.985170    5349 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:53.985186    5349 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:53.985219    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:25:53.985241    5349 main.go:141] libmachine: Decoding PEM data...
	I1207 12:25:53.985249    5349 main.go:141] libmachine: Parsing certificate...
	I1207 12:25:53.985612    5349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:25:54.114073    5349 main.go:141] libmachine: Creating SSH key...
	I1207 12:25:54.169598    5349 main.go:141] libmachine: Creating Disk image...
	I1207 12:25:54.169614    5349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:25:54.169820    5349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2
	I1207 12:25:54.182956    5349 main.go:141] libmachine: STDOUT: 
	I1207 12:25:54.182974    5349 main.go:141] libmachine: STDERR: 
	I1207 12:25:54.183022    5349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2 +20000M
	I1207 12:25:54.195273    5349 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:25:54.195286    5349 main.go:141] libmachine: STDERR: 
	I1207 12:25:54.195298    5349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2
	I1207 12:25:54.195303    5349 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:25:54.195339    5349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:6e:a7:15:1a:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2
	I1207 12:25:54.197216    5349 main.go:141] libmachine: STDOUT: 
	I1207 12:25:54.197235    5349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:54.197252    5349 client.go:171] LocalClient.Create took 212.176292ms
	I1207 12:25:54.557471    5349 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1
	I1207 12:25:54.605438    5349 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1
	I1207 12:25:54.613888    5349 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I1207 12:25:54.626810    5349 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I1207 12:25:54.646256    5349 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1
	I1207 12:25:54.660166    5349 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I1207 12:25:54.662933    5349 cache.go:162] opening:  /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1
	I1207 12:25:54.784330    5349 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1207 12:25:54.784381    5349 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 824.181125ms
	I1207 12:25:54.784411    5349 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I1207 12:25:56.197428    5349 start.go:128] duration metric: createHost completed in 2.236849875s
	I1207 12:25:56.197482    5349 start.go:83] releasing machines lock for "no-preload-052000", held for 2.236966583s
	W1207 12:25:56.197541    5349 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:56.208103    5349 out.go:177] * Deleting "no-preload-052000" in qemu2 ...
	W1207 12:25:56.232972    5349 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:56.233004    5349 start.go:709] Will try again in 5 seconds ...
	I1207 12:25:57.634900    5349 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 exists
	I1207 12:25:57.634974    5349 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1" took 3.674917292s
	I1207 12:25:57.635000    5349 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 succeeded
	I1207 12:25:57.856362    5349 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I1207 12:25:57.856408    5349 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 3.896210125s
	I1207 12:25:57.856436    5349 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I1207 12:25:58.189249    5349 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 exists
	I1207 12:25:58.189323    5349 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1" took 4.229326458s
	I1207 12:25:58.189370    5349 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 succeeded
	I1207 12:25:58.462026    5349 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 exists
	I1207 12:25:58.462072    5349 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1" took 4.502081292s
	I1207 12:25:58.462099    5349 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 succeeded
	I1207 12:25:59.506135    5349 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 exists
	I1207 12:25:59.506176    5349 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1" took 5.546088958s
	I1207 12:25:59.506230    5349 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 succeeded
	I1207 12:26:01.233332    5349 start.go:365] acquiring machines lock for no-preload-052000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:01.233697    5349 start.go:369] acquired machines lock for "no-preload-052000" in 291.792µs
	I1207 12:26:01.233804    5349 start.go:93] Provisioning new machine with config: &{Name:no-preload-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-052000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:26:01.234006    5349 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:26:01.242460    5349 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:26:01.291324    5349 start.go:159] libmachine.API.Create for "no-preload-052000" (driver="qemu2")
	I1207 12:26:01.291396    5349 client.go:168] LocalClient.Create starting
	I1207 12:26:01.291558    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:26:01.291646    5349 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:01.291668    5349 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:01.291748    5349 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:26:01.291789    5349 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:01.291807    5349 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:01.292363    5349 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:26:01.435633    5349 main.go:141] libmachine: Creating SSH key...
	I1207 12:26:01.696451    5349 main.go:141] libmachine: Creating Disk image...
	I1207 12:26:01.696464    5349 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:26:01.696732    5349 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2
	I1207 12:26:01.709570    5349 main.go:141] libmachine: STDOUT: 
	I1207 12:26:01.709594    5349 main.go:141] libmachine: STDERR: 
	I1207 12:26:01.709649    5349 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2 +20000M
	I1207 12:26:01.720381    5349 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:26:01.720398    5349 main.go:141] libmachine: STDERR: 
	I1207 12:26:01.720414    5349 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2
	I1207 12:26:01.720420    5349 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:26:01.720477    5349 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:51:0b:00:af:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2
	I1207 12:26:01.722410    5349 main.go:141] libmachine: STDOUT: 
	I1207 12:26:01.722428    5349 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:01.722452    5349 client.go:171] LocalClient.Create took 431.047292ms
	I1207 12:26:03.416588    5349 cache.go:157] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I1207 12:26:03.416670    5349 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 9.456613s
	I1207 12:26:03.416706    5349 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I1207 12:26:03.416781    5349 cache.go:87] Successfully saved all images to host disk.
	I1207 12:26:03.724151    5349 start.go:128] duration metric: createHost completed in 2.490071125s
	I1207 12:26:03.724225    5349 start.go:83] releasing machines lock for "no-preload-052000", held for 2.490554625s
	W1207 12:26:03.724660    5349 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-052000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-052000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:03.732102    5349 out.go:177] 
	W1207 12:26:03.739250    5349 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:03.739318    5349 out.go:239] * 
	* 
	W1207 12:26:03.742011    5349 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:26:03.751219    5349 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (64.842666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.99s)
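Every qemu2 start failure in this run reduces to the same stderr line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`: the socket_vmnet daemon was not serving its unix socket on the agent, so `socket_vmnet_client` could not hand a networking fd to qemu-system-aarch64. A minimal pre-flight sketch (hedged: `check_vmnet_socket` is a hypothetical helper, and the paths are simply the defaults visible in the log above, not verified against this agent):

```shell
#!/bin/sh
# Hypothetical pre-flight check: confirm the unix socket that minikube's
# qemu2 driver dials (via socket_vmnet_client) is actually being served
# before attempting any 'minikube start --driver=qemu2'.
check_vmnet_socket() {
  sock="${1:-/var/run/socket_vmnet}"
  if [ -S "$sock" ]; then
    # -S is true only for an existing socket file
    echo "ok: $sock is a unix socket"
    return 0
  fi
  echo "missing: $sock (start socket_vmnet, then retry minikube start)"
  return 1
}

check_vmnet_socket /var/run/socket_vmnet || true
```

Gating the test job on a check like this would collapse the dozens of cascading `GUEST_PROVISION` failures in this report into a single, clearly labeled infrastructure error.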

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-643000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-643000 create -f testdata/busybox.yaml: exit status 1 (28.282292ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-643000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000: exit status 7 (31.36075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-643000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000: exit status 7 (31.280042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-643000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-643000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-643000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-643000 describe deploy/metrics-server -n kube-system: exit status 1 (25.900792ms)

** stderr ** 
	error: context "old-k8s-version-643000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-643000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000: exit status 7 (31.470125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-643000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-643000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-643000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.211400958s)

-- stdout --
	* [old-k8s-version-643000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-643000 in cluster old-k8s-version-643000
	* Restarting existing qemu2 VM for "old-k8s-version-643000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-643000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:25:59.036639    5413 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:25:59.036784    5413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:59.036787    5413 out.go:309] Setting ErrFile to fd 2...
	I1207 12:25:59.036790    5413 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:25:59.036915    5413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:25:59.037915    5413 out.go:303] Setting JSON to false
	I1207 12:25:59.054139    5413 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3330,"bootTime":1701977429,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:25:59.054232    5413 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:25:59.057568    5413 out.go:177] * [old-k8s-version-643000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:25:59.068509    5413 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:25:59.064623    5413 notify.go:220] Checking for updates...
	I1207 12:25:59.076575    5413 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:25:59.083540    5413 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:25:59.091549    5413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:25:59.099611    5413 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:25:59.107573    5413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:25:59.111890    5413 config.go:182] Loaded profile config "old-k8s-version-643000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1207 12:25:59.116593    5413 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1207 12:25:59.120600    5413 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:25:59.123579    5413 out.go:177] * Using the qemu2 driver based on existing profile
	I1207 12:25:59.130605    5413 start.go:298] selected driver: qemu2
	I1207 12:25:59.130610    5413 start.go:902] validating driver "qemu2" against &{Name:old-k8s-version-643000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-643000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:25:59.130679    5413 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:25:59.133114    5413 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:25:59.133168    5413 cni.go:84] Creating CNI manager for ""
	I1207 12:25:59.133174    5413 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 12:25:59.133179    5413 start_flags.go:323] config:
	{Name:old-k8s-version-643000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-643000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:25:59.137629    5413 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:25:59.145594    5413 out.go:177] * Starting control plane node old-k8s-version-643000 in cluster old-k8s-version-643000
	I1207 12:25:59.149498    5413 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 12:25:59.149524    5413 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1207 12:25:59.149535    5413 cache.go:56] Caching tarball of preloaded images
	I1207 12:25:59.149597    5413 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:25:59.149602    5413 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1207 12:25:59.149662    5413 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/old-k8s-version-643000/config.json ...
	I1207 12:25:59.150060    5413 start.go:365] acquiring machines lock for old-k8s-version-643000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:25:59.150103    5413 start.go:369] acquired machines lock for "old-k8s-version-643000" in 21.792µs
	I1207 12:25:59.150111    5413 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:25:59.150116    5413 fix.go:54] fixHost starting: 
	I1207 12:25:59.150232    5413 fix.go:102] recreateIfNeeded on old-k8s-version-643000: state=Stopped err=<nil>
	W1207 12:25:59.150240    5413 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:25:59.154632    5413 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-643000" ...
	I1207 12:25:59.162645    5413 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:44:f1:6b:f6:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2
	I1207 12:25:59.164789    5413 main.go:141] libmachine: STDOUT: 
	I1207 12:25:59.164807    5413 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:25:59.164833    5413 fix.go:56] fixHost completed within 14.71425ms
	I1207 12:25:59.164836    5413 start.go:83] releasing machines lock for "old-k8s-version-643000", held for 14.728625ms
	W1207 12:25:59.164843    5413 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:25:59.164873    5413 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:25:59.164878    5413 start.go:709] Will try again in 5 seconds ...
	I1207 12:26:04.164962    5413 start.go:365] acquiring machines lock for old-k8s-version-643000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:04.165034    5413 start.go:369] acquired machines lock for "old-k8s-version-643000" in 50µs
	I1207 12:26:04.165052    5413 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:26:04.165055    5413 fix.go:54] fixHost starting: 
	I1207 12:26:04.165187    5413 fix.go:102] recreateIfNeeded on old-k8s-version-643000: state=Stopped err=<nil>
	W1207 12:26:04.165192    5413 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:26:04.169720    5413 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-643000" ...
	I1207 12:26:04.177845    5413 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:44:f1:6b:f6:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/old-k8s-version-643000/disk.qcow2
	I1207 12:26:04.179748    5413 main.go:141] libmachine: STDOUT: 
	I1207 12:26:04.179778    5413 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:04.179795    5413 fix.go:56] fixHost completed within 14.740875ms
	I1207 12:26:04.179799    5413 start.go:83] releasing machines lock for "old-k8s-version-643000", held for 14.761167ms
	W1207 12:26:04.179847    5413 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-643000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-643000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:04.187908    5413 out.go:177] 
	W1207 12:26:04.190861    5413 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:04.190867    5413 out.go:239] * 
	* 
	W1207 12:26:04.191319    5413 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:26:04.203826    5413 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-643000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000: exit status 7 (34.152583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-643000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-052000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-052000 create -f testdata/busybox.yaml: exit status 1 (28.461625ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-052000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (31.458625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (30.793583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-052000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-052000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-052000 describe deploy/metrics-server -n kube-system: exit status 1 (25.639084ms)

** stderr ** 
	error: context "no-preload-052000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-052000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (31.263041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1: exit status 80 (5.206593417s)

-- stdout --
	* [no-preload-052000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-052000 in cluster no-preload-052000
	* Restarting existing qemu2 VM for "no-preload-052000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-052000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:26:04.268577    5447 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:04.268750    5447 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:04.268759    5447 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:04.268762    5447 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:04.268901    5447 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:04.270465    5447 out.go:303] Setting JSON to false
	I1207 12:26:04.289525    5447 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3335,"bootTime":1701977429,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:26:04.289607    5447 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:26:04.293729    5447 out.go:177] * [no-preload-052000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:26:04.299723    5447 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:26:04.299722    5447 notify.go:220] Checking for updates...
	I1207 12:26:04.307740    5447 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:26:04.310695    5447 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:26:04.317787    5447 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:26:04.320772    5447 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:26:04.323800    5447 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:26:04.327054    5447 config.go:182] Loaded profile config "no-preload-052000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1207 12:26:04.327308    5447 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:26:04.330752    5447 out.go:177] * Using the qemu2 driver based on existing profile
	I1207 12:26:04.337809    5447 start.go:298] selected driver: qemu2
	I1207 12:26:04.337819    5447 start.go:902] validating driver "qemu2" against &{Name:no-preload-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-052000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNode
Requested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:26:04.337867    5447 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:26:04.340071    5447 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:26:04.340120    5447 cni.go:84] Creating CNI manager for ""
	I1207 12:26:04.340128    5447 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:26:04.340134    5447 start_flags.go:323] config:
	{Name:no-preload-052000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:no-preload-052000 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:
/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:26:04.344136    5447 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:04.351786    5447 out.go:177] * Starting control plane node no-preload-052000 in cluster no-preload-052000
	I1207 12:26:04.355816    5447 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 12:26:04.355899    5447 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/no-preload-052000/config.json ...
	I1207 12:26:04.355900    5447 cache.go:107] acquiring lock: {Name:mk7d81fa01bcabe1d894043b5b1a6b542405f18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:04.355918    5447 cache.go:107] acquiring lock: {Name:mk2606e51800209d8c53dcbbee1a143784c588f3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:04.355901    5447 cache.go:107] acquiring lock: {Name:mkddf3dce2c990633eec184898b526fb432bbf7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:04.355966    5447 cache.go:115] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 exists
	I1207 12:26:04.355971    5447 cache.go:115] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 exists
	I1207 12:26:04.355973    5447 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1" took 80.291µs
	I1207 12:26:04.355979    5447 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.1 succeeded
	I1207 12:26:04.355978    5447 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1" took 60.083µs
	I1207 12:26:04.355983    5447 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.1 succeeded
	I1207 12:26:04.355986    5447 cache.go:107] acquiring lock: {Name:mk9b39d8c5199918cee88a11ece978918b16b169 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:04.355989    5447 cache.go:107] acquiring lock: {Name:mk2ad558c49824d4eacc56c0ecedde1d9039ef99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:04.356012    5447 cache.go:115] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1207 12:26:04.356018    5447 cache.go:115] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I1207 12:26:04.356017    5447 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 118.167µs
	I1207 12:26:04.356021    5447 cache.go:115] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 exists
	I1207 12:26:04.356022    5447 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 36.166µs
	I1207 12:26:04.356025    5447 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I1207 12:26:04.356025    5447 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0" took 35.75µs
	I1207 12:26:04.356028    5447 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I1207 12:26:04.356022    5447 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1207 12:26:04.356029    5447 cache.go:107] acquiring lock: {Name:mke0e27a3799c58f785465e2d8474f5f8b54763f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:04.356034    5447 cache.go:107] acquiring lock: {Name:mk4c96312c04bf74ef291a6af14b08d31e00d367 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:04.356028    5447 cache.go:107] acquiring lock: {Name:mk7b4c5206c6b667758c550921562cfd2e4e5378 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:04.356060    5447 cache.go:115] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 exists
	I1207 12:26:04.356064    5447 cache.go:115] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 exists
	I1207 12:26:04.356062    5447 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1" took 33.667µs
	I1207 12:26:04.356067    5447 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.1 succeeded
	I1207 12:26:04.356067    5447 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.1" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1" took 33.667µs
	I1207 12:26:04.356071    5447 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.1 succeeded
	I1207 12:26:04.356116    5447 cache.go:115] /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I1207 12:26:04.356122    5447 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 95.25µs
	I1207 12:26:04.356126    5447 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I1207 12:26:04.356137    5447 cache.go:87] Successfully saved all images to host disk.
	I1207 12:26:04.356401    5447 start.go:365] acquiring machines lock for no-preload-052000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:04.356426    5447 start.go:369] acquired machines lock for "no-preload-052000" in 19.416µs
	I1207 12:26:04.356433    5447 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:26:04.356439    5447 fix.go:54] fixHost starting: 
	I1207 12:26:04.356548    5447 fix.go:102] recreateIfNeeded on no-preload-052000: state=Stopped err=<nil>
	W1207 12:26:04.356555    5447 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:26:04.364770    5447 out.go:177] * Restarting existing qemu2 VM for "no-preload-052000" ...
	I1207 12:26:04.368785    5447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:51:0b:00:af:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2
	I1207 12:26:04.371140    5447 main.go:141] libmachine: STDOUT: 
	I1207 12:26:04.371162    5447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:04.371184    5447 fix.go:56] fixHost completed within 14.745583ms
	I1207 12:26:04.371188    5447 start.go:83] releasing machines lock for "no-preload-052000", held for 14.758625ms
	W1207 12:26:04.371197    5447 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:04.371243    5447 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:04.371247    5447 start.go:709] Will try again in 5 seconds ...
	I1207 12:26:09.373033    5447 start.go:365] acquiring machines lock for no-preload-052000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:09.373426    5447 start.go:369] acquired machines lock for "no-preload-052000" in 311.667µs
	I1207 12:26:09.373548    5447 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:26:09.373568    5447 fix.go:54] fixHost starting: 
	I1207 12:26:09.374318    5447 fix.go:102] recreateIfNeeded on no-preload-052000: state=Stopped err=<nil>
	W1207 12:26:09.374343    5447 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:26:09.389729    5447 out.go:177] * Restarting existing qemu2 VM for "no-preload-052000" ...
	I1207 12:26:09.393064    5447 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:51:0b:00:af:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/no-preload-052000/disk.qcow2
	I1207 12:26:09.403238    5447 main.go:141] libmachine: STDOUT: 
	I1207 12:26:09.403315    5447 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:09.403407    5447 fix.go:56] fixHost completed within 29.839833ms
	I1207 12:26:09.403426    5447 start.go:83] releasing machines lock for "no-preload-052000", held for 29.979833ms
	W1207 12:26:09.403684    5447 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-052000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-052000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:09.411770    5447 out.go:177] 
	W1207 12:26:09.414811    5447 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:09.414855    5447 out.go:239] * 
	* 
	W1207 12:26:09.417410    5447 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:26:09.430624    5447 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-052000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (70.407333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.28s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-643000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000: exit status 7 (32.440125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-643000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-643000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-643000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-643000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.21675ms)

** stderr ** 
	error: context "old-k8s-version-643000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-643000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000: exit status 7 (34.977ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-643000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-643000 image list --format=json
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000: exit status 7 (31.288167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-643000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-643000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-643000 --alsologtostderr -v=1: exit status 89 (44.182792ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-643000"

-- /stdout --
** stderr ** 
	I1207 12:26:04.449917    5464 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:04.450328    5464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:04.450331    5464 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:04.450334    5464 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:04.450497    5464 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:04.450704    5464 out.go:303] Setting JSON to false
	I1207 12:26:04.450713    5464 mustload.go:65] Loading cluster: old-k8s-version-643000
	I1207 12:26:04.450891    5464 config.go:182] Loaded profile config "old-k8s-version-643000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1207 12:26:04.454746    5464 out.go:177] * The control plane node must be running for this command
	I1207 12:26:04.458858    5464 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-643000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-643000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000: exit status 7 (35.002916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-643000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000: exit status 7 (31.123375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-643000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/embed-certs/serial/FirstStart (10.41s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-820000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-820000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (10.345312s)

-- stdout --
	* [embed-certs-820000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-820000 in cluster embed-certs-820000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-820000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:26:04.921103    5487 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:04.921249    5487 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:04.921251    5487 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:04.921254    5487 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:04.921373    5487 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:04.922442    5487 out.go:303] Setting JSON to false
	I1207 12:26:04.938482    5487 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3335,"bootTime":1701977429,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:26:04.938564    5487 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:26:04.943758    5487 out.go:177] * [embed-certs-820000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:26:04.948729    5487 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:26:04.953700    5487 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:26:04.948774    5487 notify.go:220] Checking for updates...
	I1207 12:26:04.959655    5487 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:26:04.962712    5487 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:26:04.965638    5487 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:26:04.968705    5487 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:26:04.972093    5487 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:26:04.972158    5487 config.go:182] Loaded profile config "no-preload-052000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1207 12:26:04.972223    5487 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:26:04.976681    5487 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:26:04.983735    5487 start.go:298] selected driver: qemu2
	I1207 12:26:04.983743    5487 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:26:04.983750    5487 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:26:04.986114    5487 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:26:04.989678    5487 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:26:04.992758    5487 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:26:04.992796    5487 cni.go:84] Creating CNI manager for ""
	I1207 12:26:04.992803    5487 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:26:04.992809    5487 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:26:04.992814    5487 start_flags.go:323] config:
	{Name:embed-certs-820000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-820000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:26:04.997383    5487 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:05.004578    5487 out.go:177] * Starting control plane node embed-certs-820000 in cluster embed-certs-820000
	I1207 12:26:05.008730    5487 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:26:05.008746    5487 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:26:05.008758    5487 cache.go:56] Caching tarball of preloaded images
	I1207 12:26:05.008821    5487 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:26:05.008828    5487 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:26:05.008893    5487 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/embed-certs-820000/config.json ...
	I1207 12:26:05.008904    5487 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/embed-certs-820000/config.json: {Name:mk5263832998f5af3c00a27424390952884523f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:26:05.009114    5487 start.go:365] acquiring machines lock for embed-certs-820000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:05.009146    5487 start.go:369] acquired machines lock for "embed-certs-820000" in 26.208µs
	I1207 12:26:05.009157    5487 start.go:93] Provisioning new machine with config: &{Name:embed-certs-820000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-820000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:26:05.009212    5487 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:26:05.017676    5487 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:26:05.034537    5487 start.go:159] libmachine.API.Create for "embed-certs-820000" (driver="qemu2")
	I1207 12:26:05.034567    5487 client.go:168] LocalClient.Create starting
	I1207 12:26:05.034646    5487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:26:05.034678    5487 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:05.034689    5487 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:05.034726    5487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:26:05.034750    5487 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:05.034758    5487 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:05.035107    5487 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:26:05.161078    5487 main.go:141] libmachine: Creating SSH key...
	I1207 12:26:05.237477    5487 main.go:141] libmachine: Creating Disk image...
	I1207 12:26:05.237485    5487 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:26:05.237641    5487 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2
	I1207 12:26:05.249955    5487 main.go:141] libmachine: STDOUT: 
	I1207 12:26:05.249989    5487 main.go:141] libmachine: STDERR: 
	I1207 12:26:05.250040    5487 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2 +20000M
	I1207 12:26:05.260436    5487 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:26:05.260467    5487 main.go:141] libmachine: STDERR: 
	I1207 12:26:05.260488    5487 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2
	I1207 12:26:05.260494    5487 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:26:05.260527    5487 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:00:c5:d1:27:a4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2
	I1207 12:26:05.262265    5487 main.go:141] libmachine: STDOUT: 
	I1207 12:26:05.262298    5487 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:05.262320    5487 client.go:171] LocalClient.Create took 227.750083ms
	I1207 12:26:07.264548    5487 start.go:128] duration metric: createHost completed in 2.255346292s
	I1207 12:26:07.264617    5487 start.go:83] releasing machines lock for "embed-certs-820000", held for 2.255504041s
	W1207 12:26:07.264678    5487 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:07.278872    5487 out.go:177] * Deleting "embed-certs-820000" in qemu2 ...
	W1207 12:26:07.302245    5487 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:07.302278    5487 start.go:709] Will try again in 5 seconds ...
	I1207 12:26:12.304437    5487 start.go:365] acquiring machines lock for embed-certs-820000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:12.814536    5487 start.go:369] acquired machines lock for "embed-certs-820000" in 509.985792ms
	I1207 12:26:12.814681    5487 start.go:93] Provisioning new machine with config: &{Name:embed-certs-820000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-820000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:26:12.814891    5487 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:26:12.829488    5487 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:26:12.877113    5487 start.go:159] libmachine.API.Create for "embed-certs-820000" (driver="qemu2")
	I1207 12:26:12.877154    5487 client.go:168] LocalClient.Create starting
	I1207 12:26:12.877279    5487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:26:12.877342    5487 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:12.877359    5487 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:12.877437    5487 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:26:12.877478    5487 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:12.877490    5487 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:12.878134    5487 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:26:13.025547    5487 main.go:141] libmachine: Creating SSH key...
	I1207 12:26:13.159267    5487 main.go:141] libmachine: Creating Disk image...
	I1207 12:26:13.159276    5487 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:26:13.159476    5487 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2
	I1207 12:26:13.171639    5487 main.go:141] libmachine: STDOUT: 
	I1207 12:26:13.171663    5487 main.go:141] libmachine: STDERR: 
	I1207 12:26:13.171712    5487 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2 +20000M
	I1207 12:26:13.182127    5487 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:26:13.182143    5487 main.go:141] libmachine: STDERR: 
	I1207 12:26:13.182158    5487 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2
	I1207 12:26:13.182164    5487 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:26:13.182207    5487 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:6b:8f:6d:6b:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2
	I1207 12:26:13.183836    5487 main.go:141] libmachine: STDOUT: 
	I1207 12:26:13.183853    5487 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:13.183866    5487 client.go:171] LocalClient.Create took 306.711125ms
	I1207 12:26:15.186070    5487 start.go:128] duration metric: createHost completed in 2.371191417s
	I1207 12:26:15.186131    5487 start.go:83] releasing machines lock for "embed-certs-820000", held for 2.37161725s
	W1207 12:26:15.186557    5487 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-820000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-820000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:15.203118    5487 out.go:177] 
	W1207 12:26:15.208282    5487 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:15.208316    5487 out.go:239] * 
	* 
	W1207 12:26:15.210698    5487 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:26:15.221116    5487 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-820000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000: exit status 7 (67.111291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.41s)
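Every qemu2 start failure in this run, the `exit status 80` above included, bottoms out in the same error visible further down: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A minimal triage sketch, assuming only the default socket path the driver logs (the helper itself is hypothetical, not part of the harness), is to check whether the socket the qemu2 driver dials exists at all:

```python
# Triage sketch (hypothetical helper, not part of the test harness):
# every "exit status 80" start failure in this report traces to
# 'Failed to connect to "/var/run/socket_vmnet": Connection refused',
# so the first thing to verify is that the socket exists at all.
import os
import stat

def socket_vmnet_present(path: str = "/var/run/socket_vmnet") -> bool:
    """Return True if `path` exists and is a UNIX domain socket."""
    try:
        return stat.S_ISSOCK(os.stat(path).st_mode)
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    if socket_vmnet_present():
        print("socket_vmnet socket present")
    else:
        # Assumes a Homebrew-managed service, as minikube's qemu2 driver docs suggest.
        print("socket missing; try: sudo brew services start socket_vmnet")
```

If the socket is missing on the agent, every test that creates a VM fails the same way, which matches the uniform `GUEST_PROVISION` exits throughout this run.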

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-052000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (33.753042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-052000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-052000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-052000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.525375ms)

** stderr ** 
	error: context "no-preload-052000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-052000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (31.323208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-052000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (31.388292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
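The `-want +got` diff above is a plain set comparison: each expected v1.29.0-rc.1 image must appear in the JSON output of `minikube image list --format=json`, and since the VM was never created the list comes back empty, so all eight images are reported missing. A sketch of that comparison; the `repoTags` field name and the shortened `want` list are illustrative assumptions, not copied from the harness:

```python
import json

# Two of the eight images the test expects (shortened for illustration).
want = [
    "gcr.io/k8s-minikube/storage-provisioner:v5",
    "registry.k8s.io/pause:3.9",
]

# Stand-in for `minikube image list --format=json` on a cluster that never
# started: an empty JSON array. The "repoTags" key is an assumption.
got_json = "[]"
got = {tag for img in json.loads(got_json) for tag in img.get("repoTags", [])}

missing = [img for img in want if img not in got]
print(missing)  # every expected image is missing, mirroring the "-" lines
```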

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-052000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-052000 --alsologtostderr -v=1: exit status 89 (40.773459ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-052000"

-- /stdout --
** stderr ** 
	I1207 12:26:09.709811    5511 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:09.709976    5511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:09.709979    5511 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:09.709981    5511 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:09.710118    5511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:09.710334    5511 out.go:303] Setting JSON to false
	I1207 12:26:09.710342    5511 mustload.go:65] Loading cluster: no-preload-052000
	I1207 12:26:09.710561    5511 config.go:182] Loaded profile config "no-preload-052000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1207 12:26:09.714731    5511 out.go:177] * The control plane node must be running for this command
	I1207 12:26:09.718739    5511 out.go:177]   To start a cluster, run: "minikube start -p no-preload-052000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-052000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (30.698792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (30.963125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-052000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
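The post-mortems in this run repeatedly note `status error: exit status 7 (may be ok)`: a non-zero `status` exit paired with stdout `Stopped` means the profile exists but the host is not running, so the helper skips log retrieval instead of recording a second failure. A small sketch of that classification; `host_state` and its inputs are hypothetical stand-ins for invoking `out/minikube-darwin-arm64 status --format={{.Host}}`:

```python
def host_state(exit_code: int, stdout: str) -> str:
    """Classify a `minikube status` result the way the post-mortem helper
    treats it: exit 7 plus a "Stopped" host is tolerated, not fatal."""
    if exit_code == 0:
        return "Running"
    if exit_code == 7 and stdout.strip() == "Stopped":
        return "Stopped"  # host exists but is not running; "may be ok"
    return "Error"

# Mirrors the repeated post-mortem output above.
print(host_state(7, "Stopped\n"))
```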

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.05s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.976968958s)

-- stdout --
	* [default-k8s-diff-port-986000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-986000 in cluster default-k8s-diff-port-986000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-986000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:26:10.398757    5546 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:10.398932    5546 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:10.398935    5546 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:10.398938    5546 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:10.399074    5546 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:10.400123    5546 out.go:303] Setting JSON to false
	I1207 12:26:10.416032    5546 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3341,"bootTime":1701977429,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:26:10.416103    5546 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:26:10.420957    5546 out.go:177] * [default-k8s-diff-port-986000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:26:10.427918    5546 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:26:10.427972    5546 notify.go:220] Checking for updates...
	I1207 12:26:10.431924    5546 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:26:10.435888    5546 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:26:10.438919    5546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:26:10.441934    5546 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:26:10.444818    5546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:26:10.448260    5546 config.go:182] Loaded profile config "embed-certs-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:26:10.448321    5546 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:26:10.448364    5546 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:26:10.452819    5546 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:26:10.459860    5546 start.go:298] selected driver: qemu2
	I1207 12:26:10.459871    5546 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:26:10.459877    5546 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:26:10.462290    5546 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:26:10.464842    5546 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:26:10.467966    5546 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:26:10.468021    5546 cni.go:84] Creating CNI manager for ""
	I1207 12:26:10.468029    5546 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:26:10.468035    5546 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:26:10.468042    5546 start_flags.go:323] config:
	{Name:default-k8s-diff-port-986000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-986000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:26:10.472601    5546 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:10.479936    5546 out.go:177] * Starting control plane node default-k8s-diff-port-986000 in cluster default-k8s-diff-port-986000
	I1207 12:26:10.483891    5546 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:26:10.483907    5546 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:26:10.483919    5546 cache.go:56] Caching tarball of preloaded images
	I1207 12:26:10.483983    5546 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:26:10.483989    5546 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:26:10.484061    5546 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/default-k8s-diff-port-986000/config.json ...
	I1207 12:26:10.484079    5546 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/default-k8s-diff-port-986000/config.json: {Name:mk482fcbf6e01ccf8cbbfb9f60f7c5cc600f16a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:26:10.484286    5546 start.go:365] acquiring machines lock for default-k8s-diff-port-986000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:10.484322    5546 start.go:369] acquired machines lock for "default-k8s-diff-port-986000" in 29.416µs
	I1207 12:26:10.484335    5546 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-986000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:26:10.484367    5546 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:26:10.492828    5546 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:26:10.510217    5546 start.go:159] libmachine.API.Create for "default-k8s-diff-port-986000" (driver="qemu2")
	I1207 12:26:10.510248    5546 client.go:168] LocalClient.Create starting
	I1207 12:26:10.510303    5546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:26:10.510336    5546 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:10.510345    5546 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:10.510390    5546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:26:10.510412    5546 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:10.510421    5546 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:10.510817    5546 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:26:10.636484    5546 main.go:141] libmachine: Creating SSH key...
	I1207 12:26:10.787010    5546 main.go:141] libmachine: Creating Disk image...
	I1207 12:26:10.787018    5546 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:26:10.787200    5546 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I1207 12:26:10.799961    5546 main.go:141] libmachine: STDOUT: 
	I1207 12:26:10.799984    5546 main.go:141] libmachine: STDERR: 
	I1207 12:26:10.800047    5546 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2 +20000M
	I1207 12:26:10.810427    5546 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:26:10.810444    5546 main.go:141] libmachine: STDERR: 
	I1207 12:26:10.810458    5546 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I1207 12:26:10.810465    5546 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:26:10.810502    5546 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:57:ea:ed:89:8b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I1207 12:26:10.812152    5546 main.go:141] libmachine: STDOUT: 
	I1207 12:26:10.812169    5546 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:10.812187    5546 client.go:171] LocalClient.Create took 301.93625ms
	I1207 12:26:12.814328    5546 start.go:128] duration metric: createHost completed in 2.329987958s
	I1207 12:26:12.814421    5546 start.go:83] releasing machines lock for "default-k8s-diff-port-986000", held for 2.330099459s
	W1207 12:26:12.814473    5546 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:12.839515    5546 out.go:177] * Deleting "default-k8s-diff-port-986000" in qemu2 ...
	W1207 12:26:12.858916    5546 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:12.858937    5546 start.go:709] Will try again in 5 seconds ...
	I1207 12:26:17.861057    5546 start.go:365] acquiring machines lock for default-k8s-diff-port-986000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:17.861547    5546 start.go:369] acquired machines lock for "default-k8s-diff-port-986000" in 377.625µs
	I1207 12:26:17.861675    5546 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-986000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:26:17.861969    5546 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:26:17.870549    5546 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:26:17.919528    5546 start.go:159] libmachine.API.Create for "default-k8s-diff-port-986000" (driver="qemu2")
	I1207 12:26:17.919586    5546 client.go:168] LocalClient.Create starting
	I1207 12:26:17.919700    5546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:26:17.919798    5546 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:17.919818    5546 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:17.919886    5546 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:26:17.919920    5546 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:17.919941    5546 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:17.920605    5546 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:26:18.059412    5546 main.go:141] libmachine: Creating SSH key...
	I1207 12:26:18.276857    5546 main.go:141] libmachine: Creating Disk image...
	I1207 12:26:18.276865    5546 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:26:18.277096    5546 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I1207 12:26:18.290063    5546 main.go:141] libmachine: STDOUT: 
	I1207 12:26:18.290085    5546 main.go:141] libmachine: STDERR: 
	I1207 12:26:18.290150    5546 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2 +20000M
	I1207 12:26:18.300454    5546 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:26:18.300470    5546 main.go:141] libmachine: STDERR: 
	I1207 12:26:18.300493    5546 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I1207 12:26:18.300499    5546 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:26:18.300555    5546 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7f:46:99:05:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I1207 12:26:18.302260    5546 main.go:141] libmachine: STDOUT: 
	I1207 12:26:18.302278    5546 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:18.302290    5546 client.go:171] LocalClient.Create took 382.705083ms
	I1207 12:26:20.304431    5546 start.go:128] duration metric: createHost completed in 2.442479917s
	I1207 12:26:20.304485    5546 start.go:83] releasing machines lock for "default-k8s-diff-port-986000", held for 2.442954417s
	W1207 12:26:20.304931    5546 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-986000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:20.313520    5546 out.go:177] 
	W1207 12:26:20.320628    5546 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:20.320757    5546 out.go:239] * 
	* 
	W1207 12:26:20.323660    5546 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:26:20.331567    5546 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (68.207ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.05s)
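Every failure above reduces to the same stderr line, `Failed to connect to "/var/run/socket_vmnet": Connection refused`, which indicates the socket_vmnet daemon was not running on the CI host when the qemu2 driver started. A minimal pre-flight check for that socket might look like the sketch below; the `check_socket_vmnet` helper and the Homebrew remedy in the comment are illustrative assumptions, not part of minikube's test suite.

```shell
#!/bin/sh
# Sketch of a pre-flight check for the socket_vmnet endpoint that the qemu2
# driver connects to (path taken from the logs above). The helper name and
# the suggested remedy are assumptions, not part of minikube's test suite.
check_socket_vmnet() {
  sock="$1"
  if [ -S "$sock" ]; then
    echo "ok: $sock is a unix socket"
  else
    # On a Homebrew install, the daemon is typically started with:
    #   sudo brew services start socket_vmnet
    echo "missing: $sock not found; is the socket_vmnet daemon running?"
  fi
}

check_socket_vmnet /var/run/socket_vmnet
```

Running such a check before the suite starts would turn dozens of cascading qemu2 driver failures into a single actionable message.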

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-820000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-820000 create -f testdata/busybox.yaml: exit status 1 (28.453541ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-820000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000: exit status 7 (31.186542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-820000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000: exit status 7 (31.214667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-820000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-820000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-820000 describe deploy/metrics-server -n kube-system: exit status 1 (25.647958ms)

** stderr ** 
	error: context "embed-certs-820000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-820000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000: exit status 7 (31.099458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-820000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-820000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.172192833s)

-- stdout --
	* [embed-certs-820000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-820000 in cluster embed-certs-820000
	* Restarting existing qemu2 VM for "embed-certs-820000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-820000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:26:15.704236    5580 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:15.704389    5580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:15.704393    5580 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:15.704400    5580 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:15.704515    5580 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:15.705487    5580 out.go:303] Setting JSON to false
	I1207 12:26:15.721347    5580 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3346,"bootTime":1701977429,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:26:15.721435    5580 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:26:15.724938    5580 out.go:177] * [embed-certs-820000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:26:15.731881    5580 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:26:15.736881    5580 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:26:15.731972    5580 notify.go:220] Checking for updates...
	I1207 12:26:15.742849    5580 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:26:15.745898    5580 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:26:15.748882    5580 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:26:15.751848    5580 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:26:15.755194    5580 config.go:182] Loaded profile config "embed-certs-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:26:15.755457    5580 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:26:15.759854    5580 out.go:177] * Using the qemu2 driver based on existing profile
	I1207 12:26:15.766921    5580 start.go:298] selected driver: qemu2
	I1207 12:26:15.766928    5580 start.go:902] validating driver "qemu2" against &{Name:embed-certs-820000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-820000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:26:15.766986    5580 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:26:15.769185    5580 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:26:15.769223    5580 cni.go:84] Creating CNI manager for ""
	I1207 12:26:15.769229    5580 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:26:15.769233    5580 start_flags.go:323] config:
	{Name:embed-certs-820000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-820000 Namespace:default
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/m
inikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:26:15.773282    5580 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:15.780892    5580 out.go:177] * Starting control plane node embed-certs-820000 in cluster embed-certs-820000
	I1207 12:26:15.784910    5580 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:26:15.784925    5580 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:26:15.784936    5580 cache.go:56] Caching tarball of preloaded images
	I1207 12:26:15.785005    5580 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:26:15.785010    5580 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:26:15.785078    5580 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/embed-certs-820000/config.json ...
	I1207 12:26:15.785593    5580 start.go:365] acquiring machines lock for embed-certs-820000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:15.785623    5580 start.go:369] acquired machines lock for "embed-certs-820000" in 24.208µs
	I1207 12:26:15.785631    5580 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:26:15.785636    5580 fix.go:54] fixHost starting: 
	I1207 12:26:15.785744    5580 fix.go:102] recreateIfNeeded on embed-certs-820000: state=Stopped err=<nil>
	W1207 12:26:15.785752    5580 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:26:15.789821    5580 out.go:177] * Restarting existing qemu2 VM for "embed-certs-820000" ...
	I1207 12:26:15.797850    5580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:6b:8f:6d:6b:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2
	I1207 12:26:15.799851    5580 main.go:141] libmachine: STDOUT: 
	I1207 12:26:15.799873    5580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:15.799901    5580 fix.go:56] fixHost completed within 14.2625ms
	I1207 12:26:15.799904    5580 start.go:83] releasing machines lock for "embed-certs-820000", held for 14.277875ms
	W1207 12:26:15.799925    5580 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:15.799951    5580 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:15.799955    5580 start.go:709] Will try again in 5 seconds ...
	I1207 12:26:20.801363    5580 start.go:365] acquiring machines lock for embed-certs-820000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:20.801440    5580 start.go:369] acquired machines lock for "embed-certs-820000" in 48.833µs
	I1207 12:26:20.801453    5580 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:26:20.801456    5580 fix.go:54] fixHost starting: 
	I1207 12:26:20.801598    5580 fix.go:102] recreateIfNeeded on embed-certs-820000: state=Stopped err=<nil>
	W1207 12:26:20.801603    5580 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:26:20.809792    5580 out.go:177] * Restarting existing qemu2 VM for "embed-certs-820000" ...
	I1207 12:26:20.813805    5580 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:6b:8f:6d:6b:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/embed-certs-820000/disk.qcow2
	I1207 12:26:20.816022    5580 main.go:141] libmachine: STDOUT: 
	I1207 12:26:20.816040    5580 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:20.816059    5580 fix.go:56] fixHost completed within 14.602416ms
	I1207 12:26:20.816063    5580 start.go:83] releasing machines lock for "embed-certs-820000", held for 14.61775ms
	W1207 12:26:20.816109    5580 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-820000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-820000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:20.823688    5580 out.go:177] 
	W1207 12:26:20.826852    5580 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:20.826866    5580 out.go:239] * 
	* 
	W1207 12:26:20.827412    5580 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:26:20.838774    5580 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-820000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000: exit status 7 (32.365292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.21s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-986000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-986000 create -f testdata/busybox.yaml: exit status 1 (28.340125ms)

** stderr ** 
	error: no openapi getter

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-986000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (31.262083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (30.5835ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-986000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-986000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-986000 describe deploy/metrics-server -n kube-system: exit status 1 (25.761584ms)

** stderr ** 
	error: context "default-k8s-diff-port-986000" does not exist
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-986000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (31.667375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.204368792s)
-- stdout --
	* [default-k8s-diff-port-986000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-986000 in cluster default-k8s-diff-port-986000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-986000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-986000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I1207 12:26:20.819958    5609 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:20.823721    5609 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:20.823729    5609 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:20.823740    5609 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:20.823866    5609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:20.826969    5609 out.go:303] Setting JSON to false
	I1207 12:26:20.843082    5609 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3351,"bootTime":1701977429,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:26:20.843144    5609 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:26:20.849716    5609 out.go:177] * [default-k8s-diff-port-986000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:26:20.859727    5609 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:26:20.855848    5609 notify.go:220] Checking for updates...
	I1207 12:26:20.866746    5609 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:26:20.869807    5609 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:26:20.872753    5609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:26:20.873940    5609 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:26:20.876783    5609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:26:20.880028    5609 config.go:182] Loaded profile config "default-k8s-diff-port-986000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:26:20.880294    5609 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:26:20.883668    5609 out.go:177] * Using the qemu2 driver based on existing profile
	I1207 12:26:20.890720    5609 start.go:298] selected driver: qemu2
	I1207 12:26:20.890728    5609 start.go:902] validating driver "qemu2" against &{Name:default-k8s-diff-port-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kub
ernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-986000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subne
t: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:26:20.890781    5609 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:26:20.893232    5609 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1207 12:26:20.893281    5609 cni.go:84] Creating CNI manager for ""
	I1207 12:26:20.893289    5609 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:26:20.893293    5609 start_flags.go:323] config:
	{Name:default-k8s-diff-port-986000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-9860
00 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:26:20.897684    5609 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:20.901811    5609 out.go:177] * Starting control plane node default-k8s-diff-port-986000 in cluster default-k8s-diff-port-986000
	I1207 12:26:20.909709    5609 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:26:20.909725    5609 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:26:20.909733    5609 cache.go:56] Caching tarball of preloaded images
	I1207 12:26:20.909783    5609 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:26:20.909788    5609 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:26:20.909838    5609 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/default-k8s-diff-port-986000/config.json ...
	I1207 12:26:20.910197    5609 start.go:365] acquiring machines lock for default-k8s-diff-port-986000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:20.910226    5609 start.go:369] acquired machines lock for "default-k8s-diff-port-986000" in 20.834µs
	I1207 12:26:20.910232    5609 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:26:20.910238    5609 fix.go:54] fixHost starting: 
	I1207 12:26:20.910344    5609 fix.go:102] recreateIfNeeded on default-k8s-diff-port-986000: state=Stopped err=<nil>
	W1207 12:26:20.910352    5609 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:26:20.916851    5609 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-986000" ...
	I1207 12:26:20.924779    5609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7f:46:99:05:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I1207 12:26:20.926717    5609 main.go:141] libmachine: STDOUT: 
	I1207 12:26:20.926735    5609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:20.926770    5609 fix.go:56] fixHost completed within 16.530416ms
	I1207 12:26:20.926774    5609 start.go:83] releasing machines lock for "default-k8s-diff-port-986000", held for 16.545167ms
	W1207 12:26:20.926780    5609 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:20.926819    5609 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:20.926823    5609 start.go:709] Will try again in 5 seconds ...
	I1207 12:26:25.928993    5609 start.go:365] acquiring machines lock for default-k8s-diff-port-986000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:25.929531    5609 start.go:369] acquired machines lock for "default-k8s-diff-port-986000" in 353.5µs
	I1207 12:26:25.929696    5609 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:26:25.929725    5609 fix.go:54] fixHost starting: 
	I1207 12:26:25.930576    5609 fix.go:102] recreateIfNeeded on default-k8s-diff-port-986000: state=Stopped err=<nil>
	W1207 12:26:25.930608    5609 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:26:25.940289    5609 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-986000" ...
	I1207 12:26:25.944467    5609 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:7f:46:99:05:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/default-k8s-diff-port-986000/disk.qcow2
	I1207 12:26:25.953851    5609 main.go:141] libmachine: STDOUT: 
	I1207 12:26:25.953936    5609 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:25.954125    5609 fix.go:56] fixHost completed within 24.399625ms
	I1207 12:26:25.954146    5609 start.go:83] releasing machines lock for "default-k8s-diff-port-986000", held for 24.548667ms
	W1207 12:26:25.954338    5609 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-986000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-986000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:25.963232    5609 out.go:177] 
	W1207 12:26:25.967392    5609 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:25.967431    5609 out.go:239] * 
	* 
	W1207 12:26:25.969902    5609 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:26:25.981209    5609 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-986000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (71.204083ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.28s)
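[Editor's note] Every failure in this group shares the root cause visible above: the qemu2 driver's `socket_vmnet_client` cannot connect to `/var/run/socket_vmnet`. A minimal triage sketch for the affected host is below; the socket path is copied from the log, but the snippet itself is an illustrative aid and not part of the test suite. Note that `[ -S ]` only checks that the socket file exists; a stale socket with no listening daemon still produces "Connection refused".

```shell
#!/bin/sh
# Probe the socket_vmnet UNIX socket that socket_vmnet_client connects to
# before launching qemu-system-aarch64 (path taken from the log above).
SOCK=/var/run/socket_vmnet
if [ -S "$SOCK" ]; then
    echo "socket present: $SOCK (verify a socket_vmnet daemon is listening)"
else
    # Matches the failure mode above: no socket -> "Connection refused".
    echo "socket absent: $SOCK -- start socket_vmnet before 'minikube start'"
fi
```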
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-820000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000: exit status 7 (37.130666ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-820000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-820000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-820000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.184417ms)
** stderr ** 
	error: context "embed-certs-820000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-820000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000: exit status 7 (31.430583ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-820000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000: exit status 7 (31.182917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-820000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-820000 --alsologtostderr -v=1: exit status 89 (42.248083ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-820000"
-- /stdout --
** stderr ** 
	I1207 12:26:21.076554    5628 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:21.076752    5628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:21.076754    5628 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:21.076757    5628 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:21.076875    5628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:21.077076    5628 out.go:303] Setting JSON to false
	I1207 12:26:21.077086    5628 mustload.go:65] Loading cluster: embed-certs-820000
	I1207 12:26:21.077268    5628 config.go:182] Loaded profile config "embed-certs-820000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:26:21.081755    5628 out.go:177] * The control plane node must be running for this command
	I1207 12:26:21.085934    5628 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-820000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-820000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000: exit status 7 (30.433209ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-820000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000: exit status 7 (31.29775ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-820000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-049000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-049000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1: exit status 80 (9.7341325s)
-- stdout --
	* [newest-cni-049000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-049000 in cluster newest-cni-049000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-049000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:26:21.549327    5651 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:21.549466    5651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:21.549469    5651 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:21.549471    5651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:21.549590    5651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:21.550693    5651 out.go:303] Setting JSON to false
	I1207 12:26:21.566581    5651 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3352,"bootTime":1701977429,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:26:21.566658    5651 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:26:21.575780    5651 out.go:177] * [newest-cni-049000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:26:21.579824    5651 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:26:21.583828    5651 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:26:21.579875    5651 notify.go:220] Checking for updates...
	I1207 12:26:21.589788    5651 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:26:21.592872    5651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:26:21.595784    5651 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:26:21.598805    5651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:26:21.602113    5651 config.go:182] Loaded profile config "default-k8s-diff-port-986000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:26:21.602176    5651 config.go:182] Loaded profile config "multinode-554000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:26:21.602221    5651 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:26:21.605716    5651 out.go:177] * Using the qemu2 driver based on user configuration
	I1207 12:26:21.612782    5651 start.go:298] selected driver: qemu2
	I1207 12:26:21.612791    5651 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:26:21.612798    5651 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:26:21.615075    5651 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	W1207 12:26:21.615096    5651 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1207 12:26:21.622808    5651 out.go:177] * Automatically selected the socket_vmnet network
	I1207 12:26:21.625861    5651 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 12:26:21.625895    5651 cni.go:84] Creating CNI manager for ""
	I1207 12:26:21.625904    5651 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:26:21.625909    5651 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1207 12:26:21.625914    5651 start_flags.go:323] config:
	{Name:newest-cni-049000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/
bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:26:21.630492    5651 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:21.637767    5651 out.go:177] * Starting control plane node newest-cni-049000 in cluster newest-cni-049000
	I1207 12:26:21.641870    5651 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 12:26:21.641887    5651 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1207 12:26:21.641898    5651 cache.go:56] Caching tarball of preloaded images
	I1207 12:26:21.641960    5651 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:26:21.641967    5651 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on docker
	I1207 12:26:21.642033    5651 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/newest-cni-049000/config.json ...
	I1207 12:26:21.642050    5651 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/newest-cni-049000/config.json: {Name:mkd9bffcc3dd9979dd26afb43456a7870f67304f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:26:21.642274    5651 start.go:365] acquiring machines lock for newest-cni-049000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:21.642306    5651 start.go:369] acquired machines lock for "newest-cni-049000" in 27.042µs
	I1207 12:26:21.642319    5651 start.go:93] Provisioning new machine with config: &{Name:newest-cni-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:26:21.642350    5651 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:26:21.650777    5651 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:26:21.668332    5651 start.go:159] libmachine.API.Create for "newest-cni-049000" (driver="qemu2")
	I1207 12:26:21.668377    5651 client.go:168] LocalClient.Create starting
	I1207 12:26:21.668457    5651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:26:21.668489    5651 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:21.668516    5651 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:21.668556    5651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:26:21.668580    5651 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:21.668589    5651 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:21.668977    5651 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:26:21.802535    5651 main.go:141] libmachine: Creating SSH key...
	I1207 12:26:21.883740    5651 main.go:141] libmachine: Creating Disk image...
	I1207 12:26:21.883750    5651 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:26:21.883939    5651 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2
	I1207 12:26:21.895895    5651 main.go:141] libmachine: STDOUT: 
	I1207 12:26:21.895965    5651 main.go:141] libmachine: STDERR: 
	I1207 12:26:21.896021    5651 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2 +20000M
	I1207 12:26:21.906327    5651 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:26:21.906394    5651 main.go:141] libmachine: STDERR: 
	I1207 12:26:21.906405    5651 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2
	I1207 12:26:21.906410    5651 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:26:21.906442    5651 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:38:b0:f6:b2:42 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2
	I1207 12:26:21.908154    5651 main.go:141] libmachine: STDOUT: 
	I1207 12:26:21.908206    5651 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:21.908229    5651 client.go:171] LocalClient.Create took 239.85ms
	I1207 12:26:23.910407    5651 start.go:128] duration metric: createHost completed in 2.268073042s
	I1207 12:26:23.910491    5651 start.go:83] releasing machines lock for "newest-cni-049000", held for 2.268216708s
	W1207 12:26:23.910536    5651 start.go:694] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:23.923869    5651 out.go:177] * Deleting "newest-cni-049000" in qemu2 ...
	W1207 12:26:23.948212    5651 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:23.948246    5651 start.go:709] Will try again in 5 seconds ...
	I1207 12:26:28.950317    5651 start.go:365] acquiring machines lock for newest-cni-049000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:28.950606    5651 start.go:369] acquired machines lock for "newest-cni-049000" in 227.417µs
	I1207 12:26:28.950697    5651 start.go:93] Provisioning new machine with config: &{Name:newest-cni-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1207 12:26:28.950847    5651 start.go:125] createHost starting for "" (driver="qemu2")
	I1207 12:26:28.960300    5651 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1207 12:26:29.006283    5651 start.go:159] libmachine.API.Create for "newest-cni-049000" (driver="qemu2")
	I1207 12:26:29.006355    5651 client.go:168] LocalClient.Create starting
	I1207 12:26:29.006472    5651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/ca.pem
	I1207 12:26:29.006550    5651 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:29.006576    5651 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:29.006637    5651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17719-1328/.minikube/certs/cert.pem
	I1207 12:26:29.006685    5651 main.go:141] libmachine: Decoding PEM data...
	I1207 12:26:29.006700    5651 main.go:141] libmachine: Parsing certificate...
	I1207 12:26:29.007472    5651 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso...
	I1207 12:26:29.146575    5651 main.go:141] libmachine: Creating SSH key...
	I1207 12:26:29.185127    5651 main.go:141] libmachine: Creating Disk image...
	I1207 12:26:29.185132    5651 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1207 12:26:29.185299    5651 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2.raw /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2
	I1207 12:26:29.197245    5651 main.go:141] libmachine: STDOUT: 
	I1207 12:26:29.197310    5651 main.go:141] libmachine: STDERR: 
	I1207 12:26:29.197381    5651 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2 +20000M
	I1207 12:26:29.208169    5651 main.go:141] libmachine: STDOUT: Image resized.
	
	I1207 12:26:29.208211    5651 main.go:141] libmachine: STDERR: 
	I1207 12:26:29.208223    5651 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2
	I1207 12:26:29.208238    5651 main.go:141] libmachine: Starting QEMU VM...
	I1207 12:26:29.208274    5651 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:9c:0a:73:68:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2
	I1207 12:26:29.209935    5651 main.go:141] libmachine: STDOUT: 
	I1207 12:26:29.209970    5651 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:29.209984    5651 client.go:171] LocalClient.Create took 203.6275ms
	I1207 12:26:31.212178    5651 start.go:128] duration metric: createHost completed in 2.261332875s
	I1207 12:26:31.212287    5651 start.go:83] releasing machines lock for "newest-cni-049000", held for 2.261704625s
	W1207 12:26:31.212712    5651 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-049000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-049000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:31.221525    5651 out.go:177] 
	W1207 12:26:31.227542    5651 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:31.227569    5651 out.go:239] * 
	* 
	W1207 12:26:31.230490    5651 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:26:31.242406    5651 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-049000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000: exit status 7 (72.185166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.81s)
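Every failed start in this run shares one root cause: the qemu2 driver could not reach the `/var/run/socket_vmnet` unix socket ("Connection refused") when launching `socket_vmnet_client`. As a hypothetical diagnostic sketch (the socket path comes from the log above; the restart invocation in the comment is an assumption about a typical socket_vmnet install, not something taken from this report):

```shell
# Report whether a unix socket exists at a given path.
check_socket() {
  if [ -S "$1" ]; then
    echo "present: $1"
  else
    echo "missing: $1"
  fi
}

# The socket every failing run tried to connect to (path from the log above):
check_socket /var/run/socket_vmnet

# If it is missing, socket_vmnet normally has to be started as root before
# minikube's qemu2 driver can use it, e.g. (assumed invocation, adjust paths):
#   sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
```

A missing or stale socket here would explain why both the initial create and the 5-second retry fail identically across all 88 tests.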

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-986000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (33.169583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-986000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-986000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-986000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.07725ms)

** stderr ** 
	error: context "default-k8s-diff-port-986000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-986000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (31.155625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-986000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (31.209209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
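The go-cmp diff above lists every expected v1.28.4 image as missing because `minikube image list` returned nothing: the VM never started. Purely as an illustration (not part of the test suite), the same want-vs-got comparison can be restated in shell, with the image names copied from the diff:

```shell
# Expected images for v1.28.4, copied verbatim from the go-cmp diff above.
want="gcr.io/k8s-minikube/storage-provisioner:v5
registry.k8s.io/coredns/coredns:v1.10.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/pause:3.9"
got=""    # the failed `image list` produced an empty list

# Emit each expected image that does not appear as a whole line in $got.
missing=$(printf '%s\n' "$want" | while read -r img; do
  printf '%s\n' "$got" | grep -q -x -F -e "$img" || printf '%s\n' "$img"
done)
printf '%s\n' "$missing"
```

With an empty `got`, all eight images land in `missing`, which is exactly the `-want +got` diff the test reports.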

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-986000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-986000 --alsologtostderr -v=1: exit status 89 (43.216208ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-986000"

-- /stdout --
** stderr ** 
	I1207 12:26:26.261396    5676 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:26.261580    5676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:26.261583    5676 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:26.261585    5676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:26.261720    5676 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:26.261948    5676 out.go:303] Setting JSON to false
	I1207 12:26:26.261958    5676 mustload.go:65] Loading cluster: default-k8s-diff-port-986000
	I1207 12:26:26.262163    5676 config.go:182] Loaded profile config "default-k8s-diff-port-986000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:26:26.266325    5676 out.go:177] * The control plane node must be running for this command
	I1207 12:26:26.270515    5676 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-986000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-986000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (30.670041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (31.174042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-986000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-049000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-049000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1: exit status 80 (5.194639167s)

-- stdout --
	* [newest-cni-049000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-049000 in cluster newest-cni-049000
	* Restarting existing qemu2 VM for "newest-cni-049000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-049000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1207 12:26:31.584257    5719 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:31.584411    5719 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:31.584414    5719 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:31.584416    5719 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:31.584553    5719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:31.585478    5719 out.go:303] Setting JSON to false
	I1207 12:26:31.601285    5719 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":3362,"bootTime":1701977429,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:26:31.601353    5719 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:26:31.606540    5719 out.go:177] * [newest-cni-049000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:26:31.613577    5719 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:26:31.613632    5719 notify.go:220] Checking for updates...
	I1207 12:26:31.621501    5719 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:26:31.624505    5719 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:26:31.627481    5719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:26:31.630571    5719 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:26:31.633522    5719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:26:31.636847    5719 config.go:182] Loaded profile config "newest-cni-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1207 12:26:31.637097    5719 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:26:31.641440    5719 out.go:177] * Using the qemu2 driver based on existing profile
	I1207 12:26:31.648483    5719 start.go:298] selected driver: qemu2
	I1207 12:26:31.648490    5719 start.go:902] validating driver "qemu2" against &{Name:newest-cni-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:26:31.648553    5719 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:26:31.650936    5719 start_flags.go:950] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1207 12:26:31.650983    5719 cni.go:84] Creating CNI manager for ""
	I1207 12:26:31.650990    5719 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:26:31.650995    5719 start_flags.go:323] config:
	{Name:newest-cni-049000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:newest-cni-049000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:26:31.655322    5719 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:26:31.663312    5719 out.go:177] * Starting control plane node newest-cni-049000 in cluster newest-cni-049000
	I1207 12:26:31.667481    5719 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 12:26:31.667498    5719 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1207 12:26:31.667511    5719 cache.go:56] Caching tarball of preloaded images
	I1207 12:26:31.667578    5719 preload.go:174] Found /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1207 12:26:31.667584    5719 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on docker
	I1207 12:26:31.667669    5719 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/newest-cni-049000/config.json ...
	I1207 12:26:31.668167    5719 start.go:365] acquiring machines lock for newest-cni-049000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:31.668195    5719 start.go:369] acquired machines lock for "newest-cni-049000" in 19.375µs
	I1207 12:26:31.668202    5719 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:26:31.668206    5719 fix.go:54] fixHost starting: 
	I1207 12:26:31.668325    5719 fix.go:102] recreateIfNeeded on newest-cni-049000: state=Stopped err=<nil>
	W1207 12:26:31.668334    5719 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:26:31.672354    5719 out.go:177] * Restarting existing qemu2 VM for "newest-cni-049000" ...
	I1207 12:26:31.680496    5719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:9c:0a:73:68:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2
	I1207 12:26:31.682526    5719 main.go:141] libmachine: STDOUT: 
	I1207 12:26:31.682546    5719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:31.682574    5719 fix.go:56] fixHost completed within 14.365542ms
	I1207 12:26:31.682579    5719 start.go:83] releasing machines lock for "newest-cni-049000", held for 14.380167ms
	W1207 12:26:31.682585    5719 start.go:694] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:31.682616    5719 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:31.682620    5719 start.go:709] Will try again in 5 seconds ...
	I1207 12:26:36.684735    5719 start.go:365] acquiring machines lock for newest-cni-049000: {Name:mkc8c9d4d3c35484e49fe0ce7fed6c2e097f58ab Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1207 12:26:36.685142    5719 start.go:369] acquired machines lock for "newest-cni-049000" in 281.25µs
	I1207 12:26:36.685257    5719 start.go:96] Skipping create...Using existing machine configuration
	I1207 12:26:36.685282    5719 fix.go:54] fixHost starting: 
	I1207 12:26:36.686035    5719 fix.go:102] recreateIfNeeded on newest-cni-049000: state=Stopped err=<nil>
	W1207 12:26:36.686065    5719 fix.go:128] unexpected machine state, will restart: <nil>
	I1207 12:26:36.696271    5719 out.go:177] * Restarting existing qemu2 VM for "newest-cni-049000" ...
	I1207 12:26:36.700499    5719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:9c:0a:73:68:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17719-1328/.minikube/machines/newest-cni-049000/disk.qcow2
	I1207 12:26:36.709932    5719 main.go:141] libmachine: STDOUT: 
	I1207 12:26:36.710017    5719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1207 12:26:36.710100    5719 fix.go:56] fixHost completed within 24.820542ms
	I1207 12:26:36.710116    5719 start.go:83] releasing machines lock for "newest-cni-049000", held for 24.951458ms
	W1207 12:26:36.710328    5719 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-049000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-049000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1207 12:26:36.719147    5719 out.go:177] 
	W1207 12:26:36.723291    5719 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1207 12:26:36.723316    5719 out.go:239] * 
	* 
	W1207 12:26:36.725794    5719 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:26:36.734243    5719 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-049000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000: exit status 7 (73.334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
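The run above fails twice in a row with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. the socket_vmnet networking helper that the qemu2 driver depends on was not listening on the host. A minimal pre-flight sketch that would distinguish a dead helper from a genuine minikube regression (the `check_sock` helper is hypothetical, not part of the test suite; the socket path is taken from the log, and the Homebrew service command assumes socket_vmnet was installed via Homebrew):

```shell
# Hypothetical pre-flight check for the socket_vmnet unix socket that the
# qemu2 driver connects to (path taken from the failure log above).
check_sock() {
  # -S tests that the path exists and is a unix-domain socket
  if [ -S "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

# Default path from the log; override via $SOCK if installed elsewhere.
check_sock "${SOCK:-/var/run/socket_vmnet}"
# If missing, restarting the helper is the usual fix (assumption):
#   sudo brew services start socket_vmnet
```

Running this at job start would turn the five-second GUEST_PROVISION retry loop seen above into an immediate, unambiguous infrastructure failure.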

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-049000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.1 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.1",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.1",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000: exit status 7 (31.943958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-049000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-049000 --alsologtostderr -v=1: exit status 89 (42.520125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-049000"

-- /stdout --
** stderr ** 
	I1207 12:26:36.930730    5739 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:26:36.930905    5739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:36.930909    5739 out.go:309] Setting ErrFile to fd 2...
	I1207 12:26:36.930911    5739 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:26:36.931037    5739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:26:36.931272    5739 out.go:303] Setting JSON to false
	I1207 12:26:36.931280    5739 mustload.go:65] Loading cluster: newest-cni-049000
	I1207 12:26:36.931475    5739 config.go:182] Loaded profile config "newest-cni-049000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.1
	I1207 12:26:36.935163    5739 out.go:177] * The control plane node must be running for this command
	I1207 12:26:36.938172    5739 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-049000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-049000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000: exit status 7 (31.939125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-049000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000: exit status 7 (32.246583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-049000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (154/266)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.1
10 TestDownloadOnly/v1.28.4/json-events 43.25
11 TestDownloadOnly/v1.28.4/preload-exists 0
14 TestDownloadOnly/v1.28.4/kubectl 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.1/json-events 45.53
18 TestDownloadOnly/v1.29.0-rc.1/preload-exists 0
21 TestDownloadOnly/v1.29.0-rc.1/kubectl 0
22 TestDownloadOnly/v1.29.0-rc.1/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.23
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
26 TestBinaryMirror 0.38
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
32 TestAddons/Setup 124.69
34 TestAddons/parallel/Registry 16.32
36 TestAddons/parallel/InspektorGadget 10.23
37 TestAddons/parallel/MetricsServer 5.28
40 TestAddons/parallel/CSI 50.8
41 TestAddons/parallel/Headlamp 11.39
42 TestAddons/parallel/CloudSpanner 5.19
43 TestAddons/parallel/LocalPath 53.07
44 TestAddons/parallel/NvidiaDevicePlugin 5.17
47 TestAddons/serial/GCPAuth/Namespaces 0.07
48 TestAddons/StoppedEnableDisable 12.29
56 TestHyperKitDriverInstallOrUpdate 8.39
60 TestErrorSpam/start 0.34
61 TestErrorSpam/status 0.24
62 TestErrorSpam/pause 4.47
63 TestErrorSpam/unpause 6.08
64 TestErrorSpam/stop 108.39
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 46.12
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 34.32
71 TestFunctional/serial/KubeContext 0.03
72 TestFunctional/serial/KubectlGetPods 0.05
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
76 TestFunctional/serial/CacheCmd/cache/add_local 1.43
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
78 TestFunctional/serial/CacheCmd/cache/list 0.04
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
80 TestFunctional/serial/CacheCmd/cache/cache_reload 0.85
81 TestFunctional/serial/CacheCmd/cache/delete 0.08
82 TestFunctional/serial/MinikubeKubectlCmd 0.47
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.59
84 TestFunctional/serial/ExtraConfig 37.9
85 TestFunctional/serial/ComponentHealth 0.04
86 TestFunctional/serial/LogsCmd 0.66
87 TestFunctional/serial/LogsFileCmd 0.64
88 TestFunctional/serial/InvalidService 4.22
90 TestFunctional/parallel/ConfigCmd 0.23
91 TestFunctional/parallel/DashboardCmd 9.96
92 TestFunctional/parallel/DryRun 0.23
93 TestFunctional/parallel/InternationalLanguage 0.13
94 TestFunctional/parallel/StatusCmd 0.25
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 26.49
102 TestFunctional/parallel/SSHCmd 0.13
103 TestFunctional/parallel/CpCmd 0.29
105 TestFunctional/parallel/FileSync 0.07
106 TestFunctional/parallel/CertSync 0.41
110 TestFunctional/parallel/NodeLabels 0.05
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.08
114 TestFunctional/parallel/License 0.2
115 TestFunctional/parallel/Version/short 0.04
116 TestFunctional/parallel/Version/components 0.22
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.08
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.29
122 TestFunctional/parallel/ImageCommands/Setup 1.79
123 TestFunctional/parallel/DockerEnv/bash 0.4
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
127 TestFunctional/parallel/ServiceCmd/DeployApp 13.1
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.12
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.51
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.84
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.6
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.61
136 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 1.7
137 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
139 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.12
140 TestFunctional/parallel/ServiceCmd/List 0.09
141 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
142 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
143 TestFunctional/parallel/ServiceCmd/Format 0.1
144 TestFunctional/parallel/ServiceCmd/URL 0.1
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
146 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.03
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
149 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
150 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
151 TestFunctional/parallel/ProfileCmd/profile_not_create 0.18
152 TestFunctional/parallel/ProfileCmd/profile_list 0.15
153 TestFunctional/parallel/ProfileCmd/profile_json_output 0.15
154 TestFunctional/parallel/MountCmd/any-port 5.26
155 TestFunctional/parallel/MountCmd/specific-port 2.93
156 TestFunctional/parallel/MountCmd/VerifyCleanup 0.9
157 TestFunctional/delete_addon-resizer_images 0.11
158 TestFunctional/delete_my-image_image 0.04
159 TestFunctional/delete_minikube_cached_images 0.04
163 TestImageBuild/serial/Setup 30.86
164 TestImageBuild/serial/NormalBuild 1.61
166 TestImageBuild/serial/BuildWithDockerIgnore 0.15
167 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
170 TestIngressAddonLegacy/StartLegacyK8sCluster 70.27
172 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 15.35
173 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.25
177 TestJSONOutput/start/Command 43.82
178 TestJSONOutput/start/Audit 0
180 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/pause/Command 0.28
184 TestJSONOutput/pause/Audit 0
186 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/unpause/Command 0.23
190 TestJSONOutput/unpause/Audit 0
192 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/stop/Command 12.08
196 TestJSONOutput/stop/Audit 0
198 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
200 TestErrorJSONOutput 0.32
205 TestMainNoArgs 0.03
206 TestMinikubeProfile 64.69
263 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
268 TestNoKubernetes/serial/ProfileList 0.15
269 TestNoKubernetes/serial/Stop 0.06
271 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
289 TestStartStop/group/old-k8s-version/serial/Stop 0.07
290 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
294 TestStartStop/group/no-preload/serial/Stop 0.07
295 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
311 TestStartStop/group/embed-certs/serial/Stop 0.07
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.1
316 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.07
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.1
329 TestStartStop/group/newest-cni/serial/DeployApp 0
330 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
331 TestStartStop/group/newest-cni/serial/Stop 0.07
332 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.1
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-080000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-080000: exit status 85 (94.934667ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-080000 | jenkins | v1.32.0 | 07 Dec 23 12:00 PST |          |
	|         | -p download-only-080000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 12:00:15
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 12:00:15.740971    1770 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:00:15.741147    1770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:00:15.741150    1770 out.go:309] Setting ErrFile to fd 2...
	I1207 12:00:15.741152    1770 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:00:15.741314    1770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	W1207 12:00:15.741397    1770 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17719-1328/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17719-1328/.minikube/config/config.json: no such file or directory
	I1207 12:00:15.742594    1770 out.go:303] Setting JSON to true
	I1207 12:00:15.759710    1770 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1786,"bootTime":1701977429,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:00:15.759800    1770 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:00:15.765519    1770 out.go:97] [download-only-080000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:00:15.769499    1770 out.go:169] MINIKUBE_LOCATION=17719
	I1207 12:00:15.765623    1770 notify.go:220] Checking for updates...
	W1207 12:00:15.765640    1770 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball: no such file or directory
	I1207 12:00:15.776538    1770 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:00:15.779549    1770 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:00:15.782563    1770 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:00:15.785557    1770 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	W1207 12:00:15.791489    1770 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 12:00:15.791669    1770 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:00:15.798495    1770 out.go:97] Using the qemu2 driver based on user configuration
	I1207 12:00:15.798506    1770 start.go:298] selected driver: qemu2
	I1207 12:00:15.798508    1770 start.go:902] validating driver "qemu2" against <nil>
	I1207 12:00:15.798575    1770 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1207 12:00:15.803433    1770 out.go:169] Automatically selected the socket_vmnet network
	I1207 12:00:15.810344    1770 start_flags.go:394] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1207 12:00:15.810424    1770 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1207 12:00:15.810528    1770 cni.go:84] Creating CNI manager for ""
	I1207 12:00:15.810544    1770 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1207 12:00:15.810548    1770 start_flags.go:323] config:
	{Name:download-only-080000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-080000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:00:15.816104    1770 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:00:15.820345    1770 out.go:97] Downloading VM boot image ...
	I1207 12:00:15.820359    1770 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/iso/arm64/minikube-v1.32.1-1701788780-17711-arm64.iso
	I1207 12:00:23.213934    1770 out.go:97] Starting control plane node download-only-080000 in cluster download-only-080000
	I1207 12:00:23.213959    1770 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 12:00:23.272647    1770 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1207 12:00:23.272674    1770 cache.go:56] Caching tarball of preloaded images
	I1207 12:00:23.272812    1770 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 12:00:23.276882    1770 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1207 12:00:23.276889    1770 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:00:23.356383    1770 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I1207 12:00:32.969728    1770 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:00:32.969894    1770 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:00:33.613776    1770 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1207 12:00:33.613989    1770 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/download-only-080000/config.json ...
	I1207 12:00:33.614005    1770 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/download-only-080000/config.json: {Name:mk5e2a90cd9a8bee2269d74db23564da3145f35a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1207 12:00:33.614239    1770 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1207 12:00:33.614417    1770 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I1207 12:00:34.460524    1770 out.go:169] 
	W1207 12:00:34.468559    1770 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/17719-1328/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80 0x108a1ca80] Decompressors:map[bz2:0x1400080cca0 gz:0x1400080cca8 tar:0x1400080cc50 tar.bz2:0x1400080cc60 tar.gz:0x1400080cc70 tar.xz:0x1400080cc80 tar.zst:0x1400080cc90 tbz2:0x1400080cc60 tgz:0x1400080cc70 txz:0x1400080cc80 tzst:0x1400080cc90 xz:0x1400080ccb0 zip:0x1400080ccc0 zst:0x1400080ccb8] Getters:map[file:0x14002144570 http:0x14000518230 https:0x14000518280] Dir:false ProgressListener
:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1207 12:00:34.468593    1770 out_reason.go:110] 
	W1207 12:00:34.475490    1770 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1207 12:00:34.478360    1770 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-080000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.10s)

TestDownloadOnly/v1.28.4/json-events (43.25s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-080000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-080000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (43.250387458s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (43.25s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-080000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-080000: exit status 85 (82.210875ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-080000 | jenkins | v1.32.0 | 07 Dec 23 12:00 PST |          |
	|         | -p download-only-080000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-080000 | jenkins | v1.32.0 | 07 Dec 23 12:00 PST |          |
	|         | -p download-only-080000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 12:00:34
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 12:00:34.675066    1791 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:00:34.675216    1791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:00:34.675219    1791 out.go:309] Setting ErrFile to fd 2...
	I1207 12:00:34.675222    1791 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:00:34.675353    1791 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	W1207 12:00:34.675427    1791 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17719-1328/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17719-1328/.minikube/config/config.json: no such file or directory
	I1207 12:00:34.676378    1791 out.go:303] Setting JSON to true
	I1207 12:00:34.692170    1791 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1805,"bootTime":1701977429,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:00:34.692241    1791 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:00:34.696761    1791 out.go:97] [download-only-080000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:00:34.700740    1791 out.go:169] MINIKUBE_LOCATION=17719
	I1207 12:00:34.696881    1791 notify.go:220] Checking for updates...
	I1207 12:00:34.707747    1791 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:00:34.710822    1791 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:00:34.713817    1791 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:00:34.716818    1791 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	W1207 12:00:34.722782    1791 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 12:00:34.723119    1791 config.go:182] Loaded profile config "download-only-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1207 12:00:34.723152    1791 start.go:810] api.Load failed for download-only-080000: filestore "download-only-080000": Docker machine "download-only-080000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 12:00:34.723201    1791 driver.go:392] Setting default libvirt URI to qemu:///system
	W1207 12:00:34.723214    1791 start.go:810] api.Load failed for download-only-080000: filestore "download-only-080000": Docker machine "download-only-080000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 12:00:34.726755    1791 out.go:97] Using the qemu2 driver based on existing profile
	I1207 12:00:34.726761    1791 start.go:298] selected driver: qemu2
	I1207 12:00:34.726764    1791 start.go:902] validating driver "qemu2" against &{Name:download-only-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-080000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:00:34.728932    1791 cni.go:84] Creating CNI manager for ""
	I1207 12:00:34.728949    1791 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:00:34.728956    1791 start_flags.go:323] config:
	{Name:download-only-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-080000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:00:34.733167    1791 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:00:34.735801    1791 out.go:97] Starting control plane node download-only-080000 in cluster download-only-080000
	I1207 12:00:34.735809    1791 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:00:34.792607    1791 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:00:34.792617    1791 cache.go:56] Caching tarball of preloaded images
	I1207 12:00:34.792761    1791 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:00:34.797845    1791 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1207 12:00:34.797852    1791 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:00:34.877789    1791 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I1207 12:00:41.959008    1791 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:00:41.959171    1791 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:00:42.543349    1791 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on docker
	I1207 12:00:42.543420    1791 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/download-only-080000/config.json ...
	I1207 12:00:42.543739    1791 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I1207 12:00:42.543890    1791 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/darwin/arm64/v1.28.4/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-080000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
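The download log above shows minikube's preload flow: fetch the tarball with a `?checksum=md5:…` parameter, save the checksum, then verify the local file before trusting the cache. As an illustration only (a hypothetical Python sketch, not minikube's actual Go implementation in `preload.go`/`download.go`), the verify step amounts to streaming the file through an MD5 digest and comparing hex strings:

```python
import hashlib

def verify_md5(path: str, expected_md5: str, chunk_size: int = 1 << 20) -> bool:
    """Compare a file's MD5 digest against an expected hex digest,
    reading in chunks so a multi-hundred-MB tarball never has to
    fit in memory at once."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_md5.lower()
```

Chunked reading is the important detail: the preloaded-images tarballs are large, and hashing them incrementally keeps memory flat regardless of file size.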

TestDownloadOnly/v1.29.0-rc.1/json-events (45.53s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-080000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-080000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.1 --container-runtime=docker --driver=qemu2 : (45.53088175s)
--- PASS: TestDownloadOnly/v1.29.0-rc.1/json-events (45.53s)

TestDownloadOnly/v1.29.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.1/kubectl (0.00s)

TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-080000
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-080000: exit status 85 (78.754333ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-080000 | jenkins | v1.32.0 | 07 Dec 23 12:00 PST |          |
	|         | -p download-only-080000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-080000 | jenkins | v1.32.0 | 07 Dec 23 12:00 PST |          |
	|         | -p download-only-080000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-080000 | jenkins | v1.32.0 | 07 Dec 23 12:01 PST |          |
	|         | -p download-only-080000           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.1 |                      |         |         |                     |          |
	|         | --container-runtime=docker        |                      |         |         |                     |          |
	|         | --driver=qemu2                    |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/12/07 12:01:18
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.21.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1207 12:01:18.008805    1823 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:01:18.008945    1823 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:01:18.008949    1823 out.go:309] Setting ErrFile to fd 2...
	I1207 12:01:18.008951    1823 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:01:18.009085    1823 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	W1207 12:01:18.009155    1823 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17719-1328/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17719-1328/.minikube/config/config.json: no such file or directory
	I1207 12:01:18.010011    1823 out.go:303] Setting JSON to true
	I1207 12:01:18.025765    1823 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1849,"bootTime":1701977429,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:01:18.025858    1823 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:01:18.029478    1823 out.go:97] [download-only-080000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:01:18.033520    1823 out.go:169] MINIKUBE_LOCATION=17719
	I1207 12:01:18.029556    1823 notify.go:220] Checking for updates...
	I1207 12:01:18.041543    1823 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:01:18.044608    1823 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:01:18.047635    1823 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:01:18.050589    1823 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	W1207 12:01:18.056614    1823 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1207 12:01:18.056879    1823 config.go:182] Loaded profile config "download-only-080000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	W1207 12:01:18.056902    1823 start.go:810] api.Load failed for download-only-080000: filestore "download-only-080000": Docker machine "download-only-080000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 12:01:18.056953    1823 driver.go:392] Setting default libvirt URI to qemu:///system
	W1207 12:01:18.056969    1823 start.go:810] api.Load failed for download-only-080000: filestore "download-only-080000": Docker machine "download-only-080000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1207 12:01:18.060543    1823 out.go:97] Using the qemu2 driver based on existing profile
	I1207 12:01:18.060554    1823 start.go:298] selected driver: qemu2
	I1207 12:01:18.060557    1823 start.go:902] validating driver "qemu2" against &{Name:download-only-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-080000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:01:18.062887    1823 cni.go:84] Creating CNI manager for ""
	I1207 12:01:18.062909    1823 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1207 12:01:18.062916    1823 start_flags.go:323] config:
	{Name:download-only-080000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.1 ClusterName:download-only-080000 Namespac
e:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwar
ePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:01:18.067257    1823 iso.go:125] acquiring lock: {Name:mkd8feee106937d4e5be156a6b4e5ad41cefa122 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1207 12:01:18.070619    1823 out.go:97] Starting control plane node download-only-080000 in cluster download-only-080000
	I1207 12:01:18.070629    1823 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 12:01:18.123718    1823 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1207 12:01:18.123735    1823 cache.go:56] Caching tarball of preloaded images
	I1207 12:01:18.123872    1823 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 12:01:18.128458    1823 out.go:97] Downloading Kubernetes v1.29.0-rc.1 preload ...
	I1207 12:01:18.128465    1823 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:01:18.218237    1823 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.1/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4?checksum=md5:e6c70ba8af96149bcd57a348676cbfba -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4
	I1207 12:01:25.372240    1823 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:01:25.372391    1823 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.1-docker-overlay2-arm64.tar.lz4 ...
	I1207 12:01:25.927239    1823 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.1 on docker
	I1207 12:01:25.927314    1823 profile.go:148] Saving config to /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/download-only-080000/config.json ...
	I1207 12:01:25.927602    1823 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.1 and runtime docker
	I1207 12:01:25.927747    1823 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.1/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17719-1328/.minikube/cache/darwin/arm64/v1.29.0-rc.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-080000"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.1/LogsDuration (0.08s)

TestDownloadOnly/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.23s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-080000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.38s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-032000 --alsologtostderr --binary-mirror http://127.0.0.1:49324 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-032000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-032000
--- PASS: TestBinaryMirror (0.38s)
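TestBinaryMirror above passes `--binary-mirror http://127.0.0.1:49324` so minikube fetches kubectl/kubelet/kubeadm from a local HTTP server instead of `dl.k8s.io`. A minimal sketch of such a mirror (hypothetical helper; the real test wires up its own server in Go, and the directory layout under `root` would have to mimic the upstream release paths, e.g. `release/v1.28.4/bin/darwin/arm64/kubectl`):

```python
import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

def start_mirror(root: str, port: int = 0) -> HTTPServer:
    """Serve the directory `root` on 127.0.0.1 in a background thread.
    Port 0 lets the OS pick a free port; read it back from
    server.server_port to build the --binary-mirror URL."""
    handler = partial(SimpleHTTPRequestHandler, directory=root)
    server = HTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Binding to an ephemeral port (0) is what makes this usable in parallel test runs: nothing is hard-coded, so concurrent jobs never collide.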

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-210000
addons_test.go:927: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-210000: exit status 85 (57.333708ms)

-- stdout --
	* Profile "addons-210000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-210000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-210000
addons_test.go:938: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-210000: exit status 85 (60.889958ms)

-- stdout --
	* Profile "addons-210000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-210000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (124.69s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-210000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Done: out/minikube-darwin-arm64 start -p addons-210000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns: (2m4.694618958s)
--- PASS: TestAddons/Setup (124.69s)

TestAddons/parallel/Registry (16.32s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 7.329125ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-m88bt" [620e4512-b9af-4a27-a4f2-5cc01e9c16f8] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008256959s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jzhjw" [ee5f6d59-e235-4dd7-b05f-6bcfcfcbd417] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.008165792s
addons_test.go:339: (dbg) Run:  kubectl --context addons-210000 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-210000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-210000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.932584416s)
addons_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 ip
2023/12/07 12:04:25 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:387: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.32s)

TestAddons/parallel/InspektorGadget (10.23s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5qq6z" [26d7ac39-73b7-4b89-adac-90d902c27765] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007955584s
addons_test.go:840: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-210000
addons_test.go:840: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-210000: (5.225312541s)
--- PASS: TestAddons/parallel/InspektorGadget (10.23s)

TestAddons/parallel/MetricsServer (5.28s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 2.364ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-jqrnk" [2620c749-152a-4a17-aa53-3b65dbc618b7] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008460625s
addons_test.go:414: (dbg) Run:  kubectl --context addons-210000 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.28s)

TestAddons/parallel/CSI (50.8s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 8.03125ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-210000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-210000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [e65ddfe4-cc3c-4a59-951d-03fad9ed4bc8] Pending
helpers_test.go:344: "task-pv-pod" [e65ddfe4-cc3c-4a59-951d-03fad9ed4bc8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [e65ddfe4-cc3c-4a59-951d-03fad9ed4bc8] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.010269292s
addons_test.go:583: (dbg) Run:  kubectl --context addons-210000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-210000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-210000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-210000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-210000 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-210000 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-210000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-210000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6b92bb45-c1b5-4c01-ba8f-9ef37e85a640] Pending
helpers_test.go:344: "task-pv-pod-restore" [6b92bb45-c1b5-4c01-ba8f-9ef37e85a640] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6b92bb45-c1b5-4c01-ba8f-9ef37e85a640] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.011541333s
addons_test.go:625: (dbg) Run:  kubectl --context addons-210000 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-210000 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-210000 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-darwin-arm64 -p addons-210000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.098649542s)
addons_test.go:641: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.80s)

TestAddons/parallel/Headlamp (11.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-210000 --alsologtostderr -v=1
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-rlqm4" [be52fcab-6ea7-4ce3-a59a-75cdb0017dc4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-rlqm4" [be52fcab-6ea7-4ce3-a59a-75cdb0017dc4] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.007352167s
--- PASS: TestAddons/parallel/Headlamp (11.39s)

TestAddons/parallel/CloudSpanner (5.19s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-wh4h8" [09041e00-f497-49c4-a0b6-98f28915a6c3] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007566417s
addons_test.go:859: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-210000
--- PASS: TestAddons/parallel/CloudSpanner (5.19s)

TestAddons/parallel/LocalPath (53.07s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-210000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-210000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-210000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [0c6079d2-1409-4b04-8ed6-16f22dfd1059] Pending
helpers_test.go:344: "test-local-path" [0c6079d2-1409-4b04-8ed6-16f22dfd1059] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [0c6079d2-1409-4b04-8ed6-16f22dfd1059] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [0c6079d2-1409-4b04-8ed6-16f22dfd1059] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.006820958s
addons_test.go:890: (dbg) Run:  kubectl --context addons-210000 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 ssh "cat /opt/local-path-provisioner/pvc-1d29993c-57ff-4743-8c6b-7badd87bfdbd_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-210000 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-210000 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-darwin-arm64 -p addons-210000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-darwin-arm64 -p addons-210000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.317598583s)
--- PASS: TestAddons/parallel/LocalPath (53.07s)

TestAddons/parallel/NvidiaDevicePlugin (5.17s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-6slck" [e708bec9-c211-46f4-9b20-f52f10b9b736] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.007257834s
addons_test.go:954: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-210000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.17s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-210000 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-210000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.29s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-210000
addons_test.go:171: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-210000: (12.092029s)
addons_test.go:175: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-210000
addons_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-210000
addons_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-210000
--- PASS: TestAddons/StoppedEnableDisable (12.29s)

TestHyperKitDriverInstallOrUpdate (8.39s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.39s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 status: exit status 6 (80.135834ms)

-- stdout --
	nospam-890000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1207 12:06:41.177375    2195 status.go:415] kubeconfig endpoint: extract IP: "nospam-890000" does not appear in /Users/jenkins/minikube-integration/17719-1328/kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 status" failed: exit status 6
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 status: exit status 6 (79.413958ms)

-- stdout --
	nospam-890000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1207 12:06:41.257078    2197 status.go:415] kubeconfig endpoint: extract IP: "nospam-890000" does not appear in /Users/jenkins/minikube-integration/17719-1328/kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 status" failed: exit status 6
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 status: exit status 6 (79.605667ms)

-- stdout --
	nospam-890000
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1207 12:06:41.336799    2199 status.go:415] kubeconfig endpoint: extract IP: "nospam-890000" does not appear in /Users/jenkins/minikube-integration/17719-1328/kubeconfig

** /stderr **
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (4.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 pause: exit status 80 (1.287488041s)

-- stdout --
	* Pausing node nospam-890000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 pause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 pause: exit status 80 (1.551192333s)

-- stdout --
	* Pausing node nospam-890000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 pause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 pause: exit status 80 (1.628498542s)

-- stdout --
	* Pausing node nospam-890000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (4.47s)

TestErrorSpam/unpause (6.08s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 unpause: exit status 80 (2.258028083s)

-- stdout --
	* Unpausing node nospam-890000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	sudo journalctl --no-pager -u kubelet:
	-- stdout --
	-- Journal begins at Thu 2023-12-07 20:06:34 UTC, ends at Thu 2023-12-07 20:06:48 UTC. --
	-- No entries --
	
	-- /stdout --
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 unpause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 unpause: exit status 80 (2.051419s)

-- stdout --
	* Unpausing node nospam-890000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	sudo journalctl --no-pager -u kubelet:
	-- stdout --
	-- Journal begins at Thu 2023-12-07 20:06:34 UTC, ends at Thu 2023-12-07 20:06:50 UTC. --
	-- No entries --
	
	-- /stdout --
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 unpause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 unpause: exit status 80 (1.770254458s)

-- stdout --
	* Unpausing node nospam-890000 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	sudo journalctl --no-pager -u kubelet:
	-- stdout --
	-- Journal begins at Thu 2023-12-07 20:06:34 UTC, ends at Thu 2023-12-07 20:06:52 UTC. --
	-- No entries --
	
	-- /stdout --
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.08s)

TestErrorSpam/stop (108.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 stop: (1m48.230053416s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-890000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-890000 stop
--- PASS: TestErrorSpam/stop (108.39s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17719-1328/.minikube/files/etc/test/nested/copy/1768/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (46.12s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-469000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E1207 12:09:09.392971    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:09.399806    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:09.410827    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:09.432864    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:09.474917    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:09.557009    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:09.719108    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:10.041185    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:10.683294    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:11.964788    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:14.524973    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:19.646949    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-469000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (46.117132083s)
--- PASS: TestFunctional/serial/StartWithProxy (46.12s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (34.32s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-469000 --alsologtostderr -v=8
E1207 12:09:29.888987    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
E1207 12:09:50.370710    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-469000 --alsologtostderr -v=8: (34.320399584s)
functional_test.go:659: soft start took 34.320816375s for "functional-469000" cluster.
--- PASS: TestFunctional/serial/SoftStart (34.32s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-469000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-469000 cache add registry.k8s.io/pause:3.1: (1.229808791s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-469000 cache add registry.k8s.io/pause:3.3: (1.235209292s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local402952379/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 cache add minikube-local-cache-test:functional-469000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 cache delete minikube-local-cache-test:functional-469000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-469000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.43s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-469000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (74.784625ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (0.85s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.47s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 kubectl -- --context functional-469000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.47s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.59s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-469000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.59s)

TestFunctional/serial/ExtraConfig (37.9s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-469000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1207 12:10:31.331870    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/addons-210000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-469000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.898369875s)
functional_test.go:757: restart took 37.898479791s for "functional-469000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.90s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-469000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.66s)

TestFunctional/serial/LogsFileCmd (0.64s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3637895021/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.64s)

TestFunctional/serial/InvalidService (4.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-469000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-469000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-469000: exit status 115 (111.85425ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:32251 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-469000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-469000 config get cpus: exit status 14 (33.391416ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-469000 config get cpus: exit status 14 (33.451292ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DashboardCmd (9.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-469000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-469000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2802: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.96s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-469000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-469000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (120.943541ms)

-- stdout --
	* [functional-469000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1207 12:11:42.700777    2784 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:11:42.700924    2784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:11:42.700926    2784 out.go:309] Setting ErrFile to fd 2...
	I1207 12:11:42.700929    2784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:11:42.701049    2784 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:11:42.702059    2784 out.go:303] Setting JSON to false
	I1207 12:11:42.719558    2784 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2473,"bootTime":1701977429,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:11:42.719651    2784 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:11:42.724952    2784 out.go:177] * [functional-469000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	I1207 12:11:42.732930    2784 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:11:42.735970    2784 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:11:42.732995    2784 notify.go:220] Checking for updates...
	I1207 12:11:42.742967    2784 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:11:42.745945    2784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:11:42.748958    2784 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:11:42.751929    2784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:11:42.755224    2784 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:11:42.755456    2784 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:11:42.759990    2784 out.go:177] * Using the qemu2 driver based on existing profile
	I1207 12:11:42.766921    2784 start.go:298] selected driver: qemu2
	I1207 12:11:42.766928    2784 start.go:902] validating driver "qemu2" against &{Name:functional-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:functional-469000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:11:42.766975    2784 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:11:42.772942    2784 out.go:177] 
	W1207 12:11:42.776963    2784 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1207 12:11:42.780957    2784 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-469000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-469000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-469000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (124.912167ms)

-- stdout --
	* [functional-469000] minikube v1.32.0 sur Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1207 12:11:42.928153    2795 out.go:296] Setting OutFile to fd 1 ...
	I1207 12:11:42.928324    2795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:11:42.928328    2795 out.go:309] Setting ErrFile to fd 2...
	I1207 12:11:42.928330    2795 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1207 12:11:42.928457    2795 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
	I1207 12:11:42.929847    2795 out.go:303] Setting JSON to false
	I1207 12:11:42.946745    2795 start.go:128] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2473,"bootTime":1701977429,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.1.2","kernelVersion":"23.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1207 12:11:42.946833    2795 start.go:136] gopshost.Virtualization returned error: not implemented yet
	I1207 12:11:42.952011    2795 out.go:177] * [functional-469000] minikube v1.32.0 sur Darwin 14.1.2 (arm64)
	I1207 12:11:42.958982    2795 out.go:177]   - MINIKUBE_LOCATION=17719
	I1207 12:11:42.962990    2795 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	I1207 12:11:42.959087    2795 notify.go:220] Checking for updates...
	I1207 12:11:42.968958    2795 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1207 12:11:42.971972    2795 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1207 12:11:42.978929    2795 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	I1207 12:11:42.986878    2795 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1207 12:11:42.991175    2795 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I1207 12:11:42.991435    2795 driver.go:392] Setting default libvirt URI to qemu:///system
	I1207 12:11:42.995795    2795 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1207 12:11:43.002938    2795 start.go:298] selected driver: qemu2
	I1207 12:11:43.002944    2795 start.go:902] validating driver "qemu2" against &{Name:functional-469000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17711/minikube-v1.32.1-1701788780-17711-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1701974066-17719@sha256:cec630e7d143790c46e2dc54dbb8f39a22d8ede3e3c25e34638082e2c107a85c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-469000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1207 12:11:43.002991    2795 start.go:913] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1207 12:11:43.010018    2795 out.go:177] 
	W1207 12:11:43.013947    2795 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1207 12:11:43.017954    2795 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (0.25s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.25s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (26.49s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8a3e6ab7-cbea-4dfd-82b4-f39c0deda91f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.006948417s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-469000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-469000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-469000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-469000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e4a1e1ad-69c6-4788-b6b1-c4b752891f30] Pending
helpers_test.go:344: "sp-pod" [e4a1e1ad-69c6-4788-b6b1-c4b752891f30] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e4a1e1ad-69c6-4788-b6b1-c4b752891f30] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.008656459s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-469000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-469000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-469000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0129b1e9-6971-47c1-8e56-e08bb7365f22] Pending
helpers_test.go:344: "sp-pod" [0129b1e9-6971-47c1-8e56-e08bb7365f22] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0129b1e9-6971-47c1-8e56-e08bb7365f22] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007339s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-469000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.49s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh -n functional-469000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 cp functional-469000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2767119728/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh -n functional-469000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.29s)

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1768/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "sudo cat /etc/test/nested/copy/1768/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.41s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1768.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "sudo cat /etc/ssl/certs/1768.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1768.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "sudo cat /usr/share/ca-certificates/1768.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/17682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "sudo cat /etc/ssl/certs/17682.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/17682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "sudo cat /usr/share/ca-certificates/17682.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.41s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-469000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-469000 ssh "sudo systemctl is-active crio": exit status 1 (83.395875ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.08s)

TestFunctional/parallel/License (0.2s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.20s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.22s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.22s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-469000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-469000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-469000
docker.io/kubernetesui/metrics-scraper:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-469000 image ls --format short --alsologtostderr:
I1207 12:11:46.104225    2825 out.go:296] Setting OutFile to fd 1 ...
I1207 12:11:46.104601    2825 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:11:46.104607    2825 out.go:309] Setting ErrFile to fd 2...
I1207 12:11:46.104610    2825 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:11:46.104744    2825 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
I1207 12:11:46.105140    2825 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 12:11:46.105202    2825 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 12:11:46.106085    2825 ssh_runner.go:195] Run: systemctl --version
I1207 12:11:46.106095    2825 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/functional-469000/id_rsa Username:docker}
I1207 12:11:46.132812    2825 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-469000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-apiserver              | v1.28.4           | 04b4c447bb9d4 | 120MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 9cdd6470f48c8 | 181MB  |
| gcr.io/google-containers/addon-resizer      | functional-469000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/localhost/my-image                | functional-469000 | 84941deba2b8a | 1.41MB |
| docker.io/library/nginx                     | alpine            | f09fc93534f6a | 43.4MB |
| docker.io/library/nginx                     | latest            | 5628e5ea3c17f | 192MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/kube-controller-manager     | v1.28.4           | 9961cbceaf234 | 116MB  |
| registry.k8s.io/kube-scheduler              | v1.28.4           | 05c284c929889 | 57.8MB |
| registry.k8s.io/kube-proxy                  | v1.28.4           | 3ca3ca488cf13 | 68.4MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-469000 | 938f21ba313c6 | 30B    |
| gcr.io/k8s-minikube/busybox                 | latest            | 71a676dd070f4 | 1.41MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-469000 image ls --format table --alsologtostderr:
I1207 12:11:48.624832    2837 out.go:296] Setting OutFile to fd 1 ...
I1207 12:11:48.625014    2837 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:11:48.625018    2837 out.go:309] Setting ErrFile to fd 2...
I1207 12:11:48.625021    2837 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:11:48.625165    2837 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
I1207 12:11:48.625587    2837 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 12:11:48.625654    2837 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 12:11:48.626679    2837 ssh_runner.go:195] Run: systemctl --version
I1207 12:11:48.626691    2837 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/functional-469000/id_rsa Username:docker}
I1207 12:11:48.652040    2837 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2023/12/07 12:11:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-469000 image ls --format json --alsologtostderr:
[{"id":"9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"181000000"},{"id":"97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"51400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"120000000"},{"id":"3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"68400000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1410000"},{"id":"84941deba2b8afd740e973ee044ded2e4fc3a99fc23679d16bd97ae789b2f6ca","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-469000"],"size":"1410000"},{"id":"f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43400000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"938f21ba313c64c2a36e5f9fdadcb0fd69424e0f04548e95ec5d700271412397","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-469000"],"size":"30"},{"id":"05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"57800000"},{"id":"9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"116000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-469000"],"size":"32900000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-469000 image ls --format json --alsologtostderr:
I1207 12:11:48.547129    2835 out.go:296] Setting OutFile to fd 1 ...
I1207 12:11:48.547342    2835 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:11:48.547345    2835 out.go:309] Setting ErrFile to fd 2...
I1207 12:11:48.547348    2835 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:11:48.547487    2835 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
I1207 12:11:48.547933    2835 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 12:11:48.548001    2835 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 12:11:48.549005    2835 ssh_runner.go:195] Run: systemctl --version
I1207 12:11:48.549014    2835 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/functional-469000/id_rsa Username:docker}
I1207 12:11:48.574267    2835 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.08s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-469000 image ls --format yaml --alsologtostderr:
- id: f09fc93534f6a80e1cb9ad70fe8c697b1596faa9f1b50895f203bc02feb9ebb8
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43400000"
- id: 04b4c447bb9d4840af3bf7e836397379d65df87c86e55dcd27f31a8d11df2419
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "120000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 05c284c929889d88306fdb3dd14ee2d0132543740f9e247685243214fc3d2c54
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "57800000"
- id: 3ca3ca488cf13fde14cfc4b3ffde0c53a8c161b030f4a444a797fba6aef38c39
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "68400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-469000
size: "32900000"
- id: 9961cbceaf234d59b7dcf8a197a024f3e3ce4b7fe2b67c2378efd3d209ca994b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "116000000"
- id: 9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "181000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 938f21ba313c64c2a36e5f9fdadcb0fd69424e0f04548e95ec5d700271412397
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-469000
size: "30"
- id: 5628e5ea3c17fa1cbf49692edf41d5a1cdf198922898e6ffb29c19768dca8fd3
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-469000 image ls --format yaml --alsologtostderr:
I1207 12:11:46.178928    2827 out.go:296] Setting OutFile to fd 1 ...
I1207 12:11:46.179126    2827 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:11:46.179134    2827 out.go:309] Setting ErrFile to fd 2...
I1207 12:11:46.179137    2827 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:11:46.179284    2827 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
I1207 12:11:46.179712    2827 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 12:11:46.179772    2827 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 12:11:46.180686    2827 ssh_runner.go:195] Run: systemctl --version
I1207 12:11:46.180696    2827 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/functional-469000/id_rsa Username:docker}
I1207 12:11:46.206044    2827 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-469000 ssh pgrep buildkitd: exit status 1 (60.50425ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image build -t localhost/my-image:functional-469000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-469000 image build -t localhost/my-image:functional-469000 testdata/build --alsologtostderr: (2.155745833s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-469000 image build -t localhost/my-image:functional-469000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 442df3389430
Removing intermediate container 442df3389430
---> 8a82eba368ec
Step 3/3 : ADD content.txt /
---> 84941deba2b8
Successfully built 84941deba2b8
Successfully tagged localhost/my-image:functional-469000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-469000 image build -t localhost/my-image:functional-469000 testdata/build --alsologtostderr:
I1207 12:11:46.311669    2831 out.go:296] Setting OutFile to fd 1 ...
I1207 12:11:46.311908    2831 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:11:46.311912    2831 out.go:309] Setting ErrFile to fd 2...
I1207 12:11:46.311915    2831 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1207 12:11:46.312057    2831 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17719-1328/.minikube/bin
I1207 12:11:46.312495    2831 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 12:11:46.313233    2831 config.go:182] Loaded profile config "functional-469000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I1207 12:11:46.314212    2831 ssh_runner.go:195] Run: systemctl --version
I1207 12:11:46.314221    2831 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/17719-1328/.minikube/machines/functional-469000/id_rsa Username:docker}
I1207 12:11:46.340515    2831 build_images.go:151] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2285811801.tar
I1207 12:11:46.340577    2831 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1207 12:11:46.343399    2831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2285811801.tar
I1207 12:11:46.344946    2831 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2285811801.tar: stat -c "%s %y" /var/lib/minikube/build/build.2285811801.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2285811801.tar': No such file or directory
I1207 12:11:46.344961    2831 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2285811801.tar --> /var/lib/minikube/build/build.2285811801.tar (3072 bytes)
I1207 12:11:46.353471    2831 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2285811801
I1207 12:11:46.356164    2831 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2285811801 -xf /var/lib/minikube/build/build.2285811801.tar
I1207 12:11:46.359288    2831 docker.go:346] Building image: /var/lib/minikube/build/build.2285811801
I1207 12:11:46.359331    2831 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-469000 /var/lib/minikube/build/build.2285811801
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1207 12:11:48.423970    2831 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-469000 /var/lib/minikube/build/build.2285811801: (2.064677042s)
I1207 12:11:48.424043    2831 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2285811801
I1207 12:11:48.427100    2831 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2285811801.tar
I1207 12:11:48.430197    2831 build_images.go:207] Built localhost/my-image:functional-469000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2285811801.tar
I1207 12:11:48.430216    2831 build_images.go:123] succeeded building to: functional-469000
I1207 12:11:48.430218    2831 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.29s)
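For reference, the three build steps in the stdout above imply a minimal Dockerfile of roughly this shape (reconstructed from the `Step 1/3`..`Step 3/3` lines; the actual fixture lives in the test's `testdata/build` directory and may differ in detail):

```dockerfile
# Reconstructed from the step log above; not the verbatim test fixture.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```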

TestFunctional/parallel/ImageCommands/Setup (1.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.751635291s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-469000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.79s)

TestFunctional/parallel/DockerEnv/bash (0.4s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-469000 docker-env) && out/minikube-darwin-arm64 status -p functional-469000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-469000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.40s)
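The `docker-env` test above passes because the command emits shell exports that repoint the host's Docker CLI at the daemon inside the VM, which the test then `eval`s. An illustrative sketch of that output (the variable names are what minikube emits; the IP, port, and cert path shown here are assumptions based on this run's profile and will vary):

```shell
# Illustrative output of `minikube -p functional-469000 docker-env` (bash flavor).
# Values below are assumed from this run's logged VM IP; not captured verbatim.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.105.4:2376"
export DOCKER_CERT_PATH="$HOME/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="functional-469000"

# To point your shell at minikube's docker daemon, run:
# eval $(minikube -p functional-469000 docker-env)
```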

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1436: (dbg) Run:  kubectl --context functional-469000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-469000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-759d89bdcc-728zk" [edf545b7-09a0-4d00-8955-0a840c08c06b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-759d89bdcc-728zk" [edf545b7-09a0-4d00-8955-0a840c08c06b] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.011715167s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image load --daemon gcr.io/google-containers/addon-resizer:functional-469000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-469000 image load --daemon gcr.io/google-containers/addon-resizer:functional-469000 --alsologtostderr: (2.043698125s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.12s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image load --daemon gcr.io/google-containers/addon-resizer:functional-469000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-469000 image load --daemon gcr.io/google-containers/addon-resizer:functional-469000 --alsologtostderr: (1.430430833s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.51s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.786904875s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-469000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image load --daemon gcr.io/google-containers/addon-resizer:functional-469000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-469000 image load --daemon gcr.io/google-containers/addon-resizer:functional-469000 --alsologtostderr: (1.924821875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.84s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image save gcr.io/google-containers/addon-resizer:functional-469000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image rm gcr.io/google-containers/addon-resizer:functional-469000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.60s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-469000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 image save --daemon gcr.io/google-containers/addon-resizer:functional-469000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-469000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.61s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.7s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-469000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-469000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-469000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-469000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2626: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (1.70s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-469000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-469000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e9cf6b26-1262-4dbf-a900-a8cd9d33088b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e9cf6b26-1262-4dbf-a900-a8cd9d33088b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.009369916s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.12s)

TestFunctional/parallel/ServiceCmd/List (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.09s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 service list -o json
functional_test.go:1493: Took "91.250291ms" to run "out/minikube-darwin-arm64 -p functional-469000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.105.4:30538
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.105.4:30538
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-469000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.206.0 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.03s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-469000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.18s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1314: Took "112.807791ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1328: Took "37.574667ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1365: Took "113.206125ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1378: Took "37.446667ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.15s)

TestFunctional/parallel/MountCmd/any-port (5.26s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3280480482/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701979893588371000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3280480482/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701979893588371000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3280480482/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701979893588371000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3280480482/001/test-1701979893588371000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (68.243916ms)

** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  7 20:11 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  7 20:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  7 20:11 test-1701979893588371000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh cat /mount-9p/test-1701979893588371000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-469000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0f768b03-0e91-4025-a50a-10fe95e5e1c0] Pending
helpers_test.go:344: "busybox-mount" [0f768b03-0e91-4025-a50a-10fe95e5e1c0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0f768b03-0e91-4025-a50a-10fe95e5e1c0] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0f768b03-0e91-4025-a50a-10fe95e5e1c0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.007176458s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-469000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port3280480482/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.26s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.93s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port15898902/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.724375ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (60.833167ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (63.025875ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port15898902/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-469000 ssh "sudo umount -f /mount-9p": exit status 1 (62.570125ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-469000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port15898902/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.93s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.9s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1274315149/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1274315149/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1274315149/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T" /mount1: exit status 1 (80.891625ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-469000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-469000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1274315149/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1274315149/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-469000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1274315149/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.90s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.11s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-469000
--- PASS: TestFunctional/delete_addon-resizer_images (0.11s)

                                                
                                    
TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-469000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-469000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

                                                
                                    
TestImageBuild/serial/Setup (30.86s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-203000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-203000 --driver=qemu2 : (30.8562525s)
--- PASS: TestImageBuild/serial/Setup (30.86s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.61s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-203000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-203000: (1.611202917s)
--- PASS: TestImageBuild/serial/NormalBuild (1.61s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.15s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-203000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.15s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-203000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (70.27s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-427000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-427000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m10.274041625s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (70.27s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.35s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-427000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-427000 addons enable ingress --alsologtostderr -v=5: (15.347742792s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (15.35s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.25s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-427000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.25s)

                                                
                                    
TestJSONOutput/start/Command (43.82s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-340000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-340000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (43.822819083s)
--- PASS: TestJSONOutput/start/Command (43.82s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.28s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-340000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.28s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.23s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-340000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.23s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-340000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-340000 --output=json --user=testUser: (12.080939084s)
--- PASS: TestJSONOutput/stop/Command (12.08s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-063000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-063000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.101084ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2664d6a1-7c9b-4be8-98f9-1bda34738207","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-063000] minikube v1.32.0 on Darwin 14.1.2 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b3765cb8-440b-4b47-8a8c-2a1c5ca95208","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17719"}}
	{"specversion":"1.0","id":"66214064-2f2f-4343-8650-a5b95dd36454","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig"}}
	{"specversion":"1.0","id":"fd594365-cc37-4d01-9576-004c6abf2716","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"306b3de0-7983-49bc-86bf-07f58ae5ad0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0f7b180d-9750-409d-b831-6b351a694e02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube"}}
	{"specversion":"1.0","id":"35c79b8c-1c01-4305-9dc6-2d25e6428992","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"40fee384-e5a0-43d0-ae3d-2ec928124882","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-063000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-063000
--- PASS: TestErrorJSONOutput (0.32s)

                                                
                                    
TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

                                                
                                    
TestMinikubeProfile (64.69s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-110000 --driver=qemu2 
E1207 12:15:52.844777    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:15:52.851118    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:15:52.863183    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:15:52.885226    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:15:52.927276    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:15:53.009323    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:15:53.171366    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:15:53.493411    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:15:54.135477    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:15:55.417546    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:15:57.979594    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:16:03.100429    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
E1207 12:16:13.342295    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-110000 --driver=qemu2 : (30.737439083s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-111000 --driver=qemu2 
E1207 12:16:33.823950    1768 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/17719-1328/.minikube/profiles/functional-469000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-111000 --driver=qemu2 : (33.101915042s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-110000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-111000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-111000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-111000
helpers_test.go:175: Cleaning up "first-110000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-110000
--- PASS: TestMinikubeProfile (64.69s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-057000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-057000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (101.874167ms)

-- stdout --
	* [NoKubernetes-057000] minikube v1.32.0 on Darwin 14.1.2 (arm64)
	  - MINIKUBE_LOCATION=17719
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17719-1328/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17719-1328/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-057000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-057000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (44.66775ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-057000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-057000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-057000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-057000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (43.587334ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-057000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-643000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.07s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-643000 -n old-k8s-version-643000: exit status 7 (31.740916ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-643000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/no-preload/serial/Stop (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-052000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.07s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-052000 -n no-preload-052000: exit status 7 (31.431083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-052000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-820000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.07s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-820000 -n embed-certs-820000: exit status 7 (32.319375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-820000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-986000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.07s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-986000 -n default-k8s-diff-port-986000: exit status 7 (31.555542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-986000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-049000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-049000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.07s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-049000 -n newest-cni-049000: exit status 7 (31.973708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-049000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

TestDownloadOnly/v1.29.0-rc.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/cached-images (0.00s)

TestDownloadOnly/v1.29.0-rc.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.1/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:443: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-676000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-676000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-676000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-676000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-676000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-676000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-676000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-676000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-676000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-676000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-676000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-676000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-676000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-676000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-676000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-676000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-676000" does not exist

>>> k8s: coredns logs:
error: context "cilium-676000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-676000" does not exist

>>> k8s: api server logs:
error: context "cilium-676000" does not exist

>>> host: /etc/cni:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: ip a s:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: ip r s:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: iptables-save:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: iptables table nat:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-676000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-676000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-676000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-676000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-676000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-676000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-676000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-676000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-676000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-676000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-676000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: kubelet daemon config:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> k8s: kubelet logs:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-676000

>>> host: docker daemon status:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: docker daemon config:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: docker system info:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: cri-docker daemon status:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: cri-docker daemon config:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: cri-dockerd version:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: containerd daemon status:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: containerd daemon config:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: containerd config dump:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: crio daemon status:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: crio daemon config:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: /etc/crio:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

>>> host: crio config:
* Profile "cilium-676000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-676000"

----------------------- debugLogs end: cilium-676000 [took: 2.2346075s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-676000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-676000
--- SKIP: TestNetworkPlugins/group/cilium (2.47s)

TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-484000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-484000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
