Test Report: QEMU_macOS 15074

0bb29fe744a5c7c8bbbb0deb1ac8f2e2fc2fbd4c:2023-06-10:29641

Failed tests (87/242)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 28.64
7 TestDownloadOnly/v1.16.0/kubectl 0
20 TestOffline 10.3
22 TestAddons/Setup 81.56
23 TestCertOptions 10.5
24 TestCertExpiration 196.11
25 TestDockerFlags 10.45
26 TestForceSystemdFlag 11.07
27 TestForceSystemdEnv 9.83
70 TestFunctional/parallel/ServiceCmdConnect 43.24
114 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
137 TestImageBuild/serial/BuildWithBuildArg 1.05
146 TestIngressAddonLegacy/serial/ValidateIngressAddons 56.11
181 TestMountStart/serial/StartWithMountFirst 10.3
184 TestMultiNode/serial/FreshStart2Nodes 10.12
185 TestMultiNode/serial/DeployApp2Nodes 103.19
186 TestMultiNode/serial/PingHostFrom2Pods 0.08
187 TestMultiNode/serial/AddNode 0.07
188 TestMultiNode/serial/ProfileList 0.11
189 TestMultiNode/serial/CopyFile 0.06
190 TestMultiNode/serial/StopNode 0.13
191 TestMultiNode/serial/StartAfterStop 0.1
192 TestMultiNode/serial/RestartKeepsNodes 5.36
193 TestMultiNode/serial/DeleteNode 0.1
194 TestMultiNode/serial/StopMultiNode 0.15
195 TestMultiNode/serial/RestartMultiNode 5.24
196 TestMultiNode/serial/ValidateNameConflict 20.34
200 TestPreload 10.44
202 TestScheduledStopUnix 9.98
203 TestSkaffold 14.15
206 TestRunningBinaryUpgrade 168.32
208 TestKubernetesUpgrade 15.57
221 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.43
222 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.02
223 TestStoppedBinaryUpgrade/Setup 138.95
225 TestPause/serial/Start 10.22
235 TestNoKubernetes/serial/StartWithK8s 9.87
236 TestNoKubernetes/serial/StartWithStopK8s 5.47
237 TestNoKubernetes/serial/Start 5.46
241 TestNoKubernetes/serial/StartNoArgs 5.46
243 TestNetworkPlugins/group/auto/Start 9.82
244 TestNetworkPlugins/group/kindnet/Start 9.82
245 TestNetworkPlugins/group/flannel/Start 9.94
246 TestNetworkPlugins/group/enable-default-cni/Start 9.74
247 TestNetworkPlugins/group/bridge/Start 9.71
248 TestNetworkPlugins/group/kubenet/Start 9.93
249 TestNetworkPlugins/group/custom-flannel/Start 9.83
250 TestNetworkPlugins/group/calico/Start 9.77
251 TestNetworkPlugins/group/false/Start 9.67
252 TestStoppedBinaryUpgrade/Upgrade 2.3
253 TestStoppedBinaryUpgrade/MinikubeLogs 0.12
255 TestStartStop/group/old-k8s-version/serial/FirstStart 11.9
257 TestStartStop/group/no-preload/serial/FirstStart 9.87
258 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
259 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
262 TestStartStop/group/old-k8s-version/serial/SecondStart 5.23
263 TestStartStop/group/no-preload/serial/DeployApp 0.09
264 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.11
267 TestStartStop/group/no-preload/serial/SecondStart 5.25
268 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
269 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
270 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
271 TestStartStop/group/old-k8s-version/serial/Pause 0.1
273 TestStartStop/group/embed-certs/serial/FirstStart 10.2
274 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
275 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
276 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
277 TestStartStop/group/no-preload/serial/Pause 0.1
279 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.75
280 TestStartStop/group/embed-certs/serial/DeployApp 0.09
281 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
284 TestStartStop/group/embed-certs/serial/SecondStart 5.21
285 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
286 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
289 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.24
290 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
291 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
292 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
293 TestStartStop/group/embed-certs/serial/Pause 0.1
295 TestStartStop/group/newest-cni/serial/FirstStart 10.1
296 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
297 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.05
298 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
299 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
304 TestStartStop/group/newest-cni/serial/SecondStart 5.24
307 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
308 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.16.0/json-events (28.64s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-414000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-414000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=qemu2 : exit status 40 (28.637592542s)

-- stdout --
	{"specversion":"1.0","id":"331857a1-08eb-4127-a69b-3a48da579b6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-414000] minikube v1.30.1 on Darwin 13.4 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a6013e16-dff3-49f8-b1f2-578a3150d7db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15074"}}
	{"specversion":"1.0","id":"495487c5-bc11-400d-a921-91771e9573f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig"}}
	{"specversion":"1.0","id":"34f24569-de90-47bb-a016-0b1a3751604e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"a550978d-5110-458d-8b1a-f0cc084010c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a37f9d6f-ae55-46aa-9b20-33da3912578d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube"}}
	{"specversion":"1.0","id":"ffbb41ac-fffe-442f-a789-38cda915295b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"a00807f1-f11d-4a6f-bf9d-d75bdab79629","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3fa74cb0-a1e9-4736-b02f-b61f5938e744","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"c09273d8-2551-4cd7-acd9-9664ae12a042","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"cfef4b61-0fce-44b9-8b5c-817803b3d8ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node download-only-414000 in cluster download-only-414000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"768bcce3-b38d-4ceb-9d82-516fff892638","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.16.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cab95437-8bbc-463d-a485-bf06b07f4f51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15074-894/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106703f28 0x106703f28 0x106703f28 0x106703f28 0x106703f28 0x106703f28 0x106703f28] Decompressors:map[bz2:0x140001e78d8 gz:0x140001e7a30 tar:0x140001e78e0 tar.bz2:0x140001e78f0 tar.gz:0x140001e7900 tar.xz:0x140001e7a10 tar.zst:0x140001e7a20 tbz2:0x140001e78f0 tgz:0x140001e
7900 txz:0x140001e7a10 tzst:0x140001e7a20 xz:0x140001e7a38 zip:0x140001e7a40 zst:0x140001e7ac0] Getters:map[file:0x14000e8ca70 http:0x14000a2aa50 https:0x14000a2aaa0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"859bc6a0-ef6e-41dd-80b0-7848a27fabe5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0610 07:03:11.789006    1338 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:03:11.789123    1338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:03:11.789126    1338 out.go:309] Setting ErrFile to fd 2...
	I0610 07:03:11.789128    1338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:03:11.789193    1338 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	W0610 07:03:11.789253    1338 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15074-894/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15074-894/.minikube/config/config.json: no such file or directory
	I0610 07:03:11.790357    1338 out.go:303] Setting JSON to true
	I0610 07:03:11.806625    1338 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":161,"bootTime":1686405630,"procs":395,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:03:11.806685    1338 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:03:11.813344    1338 out.go:97] [download-only-414000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:03:11.816336    1338 out.go:169] MINIKUBE_LOCATION=15074
	W0610 07:03:11.813483    1338 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 07:03:11.813484    1338 notify.go:220] Checking for updates...
	I0610 07:03:11.826292    1338 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:03:11.829335    1338 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:03:11.830751    1338 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:03:11.834279    1338 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	W0610 07:03:11.840286    1338 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 07:03:11.840474    1338 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:03:11.845394    1338 out.go:97] Using the qemu2 driver based on user configuration
	I0610 07:03:11.845414    1338 start.go:297] selected driver: qemu2
	I0610 07:03:11.845429    1338 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:03:11.845494    1338 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:03:11.849305    1338 out.go:169] Automatically selected the socket_vmnet network
	I0610 07:03:11.854784    1338 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 07:03:11.854879    1338 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 07:03:11.854916    1338 cni.go:84] Creating CNI manager for ""
	I0610 07:03:11.854932    1338 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 07:03:11.854937    1338 start_flags.go:319] config:
	{Name:download-only-414000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-414000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:03:11.855112    1338 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:03:11.859276    1338 out.go:97] Downloading VM boot image ...
	I0610 07:03:11.859295    1338 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso
	I0610 07:03:27.569360    1338 out.go:97] Starting control plane node download-only-414000 in cluster download-only-414000
	I0610 07:03:27.569385    1338 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 07:03:27.675276    1338 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 07:03:27.675340    1338 cache.go:57] Caching tarball of preloaded images
	I0610 07:03:27.675546    1338 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 07:03:27.679639    1338 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0610 07:03:27.679648    1338 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:03:27.904149    1338 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 07:03:39.125180    1338 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:03:39.125326    1338 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:03:39.773759    1338 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0610 07:03:39.773947    1338 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/download-only-414000/config.json ...
	I0610 07:03:39.773965    1338 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/download-only-414000/config.json: {Name:mk7f5c6cd72cdeb7e4eb06700a00aabcc940f64e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:03:39.774207    1338 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 07:03:39.774372    1338 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0610 07:03:40.357149    1338 out.go:169] 
	W0610 07:03:40.361191    1338 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15074-894/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106703f28 0x106703f28 0x106703f28 0x106703f28 0x106703f28 0x106703f28 0x106703f28] Decompressors:map[bz2:0x140001e78d8 gz:0x140001e7a30 tar:0x140001e78e0 tar.bz2:0x140001e78f0 tar.gz:0x140001e7900 tar.xz:0x140001e7a10 tar.zst:0x140001e7a20 tbz2:0x140001e78f0 tgz:0x140001e7900 txz:0x140001e7a10 tzst:0x140001e7a20 xz:0x140001e7a38 zip:0x140001e7a40 zst:0x140001e7ac0] Getters:map[file:0x14000e8ca70 http:0x14000a2aa50 https:0x14000a2aaa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0610 07:03:40.361222    1338 out_reason.go:110] 
	W0610 07:03:40.368104    1338 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:03:40.372166    1338 out.go:169] 

** /stderr **
aaa_download_only_test.go:71: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-414000" "--force" "--alsologtostderr" "--kubernetes-version=v1.16.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.16.0/json-events (28.64s)
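
The root cause is the 404 on the kubectl checksum URL: Kubernetes v1.16.0 predates official darwin/arm64 kubectl builds, so dl.k8s.io has no binary or .sha1 file to serve for that platform. A minimal Go sketch, not part of the test suite, that reproduces the 404 independently of minikube (assumes only network access to dl.k8s.io):

	// checksum_probe.go: probe the checksum URL from the INET_CACHE_KUBECTL error above.
	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Exact URL copied from the failure message; any non-200 status here
		// means the cache step can never succeed for this version/arch pair.
		url := "https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request error:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(resp.Status) // expected here: 404 Not Found
	}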

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:160: expected the file for binary exist at "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/darwin/arm64/v1.16.0/kubectl" but got error stat /Users/jenkins/minikube-integration/15074-894/.minikube/cache/darwin/arm64/v1.16.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.16.0/kubectl (0.00s)
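
This subtest is a pure cascade of the previous failure: it only checks that the download step left a kubectl binary in the cache, and the 404 above guarantees it did not. The check amounts to a stat, sketched here with the cache path copied from the failure message (the MINIKUBE_HOME prefix is specific to this CI agent):

	// cache_check.go: the existence check this subtest performs, in isolation.
	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Path copied verbatim from the failure message above.
		kubectl := "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/darwin/arm64/v1.16.0/kubectl"
		if _, err := os.Stat(kubectl); err != nil {
			fmt.Println("cache miss:", err) // "no such file or directory", as reported
			return
		}
		fmt.Println("kubectl cached at", kubectl)
	}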

TestOffline (10.3s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-569000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-569000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.156589375s)

-- stdout --
	* [offline-docker-569000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node offline-docker-569000 in cluster offline-docker-569000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-569000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:17:47.095714    2785 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:17:47.095823    2785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:17:47.095826    2785 out.go:309] Setting ErrFile to fd 2...
	I0610 07:17:47.095828    2785 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:17:47.095897    2785 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:17:47.096935    2785 out.go:303] Setting JSON to false
	I0610 07:17:47.113410    2785 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1037,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:17:47.113516    2785 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:17:47.117893    2785 out.go:177] * [offline-docker-569000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:17:47.125824    2785 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:17:47.125850    2785 notify.go:220] Checking for updates...
	I0610 07:17:47.132711    2785 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:17:47.135787    2785 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:17:47.138784    2785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:17:47.141761    2785 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:17:47.144721    2785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:17:47.147994    2785 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:17:47.148033    2785 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:17:47.151781    2785 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:17:47.158713    2785 start.go:297] selected driver: qemu2
	I0610 07:17:47.158719    2785 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:17:47.158726    2785 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:17:47.160556    2785 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:17:47.163792    2785 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:17:47.166861    2785 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:17:47.166882    2785 cni.go:84] Creating CNI manager for ""
	I0610 07:17:47.166889    2785 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:17:47.166894    2785 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:17:47.166900    2785 start_flags.go:319] config:
	{Name:offline-docker-569000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-569000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:17:47.166995    2785 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:17:47.169678    2785 out.go:177] * Starting control plane node offline-docker-569000 in cluster offline-docker-569000
	I0610 07:17:47.177572    2785 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:17:47.177602    2785 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:17:47.177614    2785 cache.go:57] Caching tarball of preloaded images
	I0610 07:17:47.177680    2785 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:17:47.177685    2785 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:17:47.177742    2785 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/offline-docker-569000/config.json ...
	I0610 07:17:47.177753    2785 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/offline-docker-569000/config.json: {Name:mk9bbe34a6c3cea5a42fe02622401f6c3c89c609 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:17:47.177976    2785 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:17:47.177990    2785 start.go:364] acquiring machines lock for offline-docker-569000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:17:47.178016    2785 start.go:368] acquired machines lock for "offline-docker-569000" in 21µs
	I0610 07:17:47.178025    2785 start.go:93] Provisioning new machine with config: &{Name:offline-docker-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-569000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:17:47.178051    2785 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:17:47.182728    2785 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 07:17:47.197333    2785 start.go:159] libmachine.API.Create for "offline-docker-569000" (driver="qemu2")
	I0610 07:17:47.197361    2785 client.go:168] LocalClient.Create starting
	I0610 07:17:47.197422    2785 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:17:47.197442    2785 main.go:141] libmachine: Decoding PEM data...
	I0610 07:17:47.197451    2785 main.go:141] libmachine: Parsing certificate...
	I0610 07:17:47.197492    2785 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:17:47.197507    2785 main.go:141] libmachine: Decoding PEM data...
	I0610 07:17:47.197516    2785 main.go:141] libmachine: Parsing certificate...
	I0610 07:17:47.197830    2785 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:17:47.358065    2785 main.go:141] libmachine: Creating SSH key...
	I0610 07:17:47.530926    2785 main.go:141] libmachine: Creating Disk image...
	I0610 07:17:47.530937    2785 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:17:47.531146    2785 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2
	I0610 07:17:47.540158    2785 main.go:141] libmachine: STDOUT: 
	I0610 07:17:47.540174    2785 main.go:141] libmachine: STDERR: 
	I0610 07:17:47.540243    2785 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2 +20000M
	I0610 07:17:47.548766    2785 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:17:47.548790    2785 main.go:141] libmachine: STDERR: 
	I0610 07:17:47.548814    2785 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2
	I0610 07:17:47.548819    2785 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:17:47.548865    2785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:95:e7:eb:27:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2
	I0610 07:17:47.550533    2785 main.go:141] libmachine: STDOUT: 
	I0610 07:17:47.550545    2785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:17:47.550563    2785 client.go:171] LocalClient.Create took 353.208916ms
	I0610 07:17:49.552566    2785 start.go:128] duration metric: createHost completed in 2.374581708s
	I0610 07:17:49.552596    2785 start.go:83] releasing machines lock for "offline-docker-569000", held for 2.374651041s
	W0610 07:17:49.552612    2785 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:17:49.559770    2785 out.go:177] * Deleting "offline-docker-569000" in qemu2 ...
	W0610 07:17:49.573341    2785 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:17:49.573351    2785 start.go:702] Will try again in 5 seconds ...
	I0610 07:17:54.575430    2785 start.go:364] acquiring machines lock for offline-docker-569000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:17:54.575899    2785 start.go:368] acquired machines lock for "offline-docker-569000" in 368.416µs
	I0610 07:17:54.576050    2785 start.go:93] Provisioning new machine with config: &{Name:offline-docker-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:offline-docker-569000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:17:54.576403    2785 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:17:54.581316    2785 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 07:17:54.629682    2785 start.go:159] libmachine.API.Create for "offline-docker-569000" (driver="qemu2")
	I0610 07:17:54.629718    2785 client.go:168] LocalClient.Create starting
	I0610 07:17:54.629853    2785 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:17:54.629899    2785 main.go:141] libmachine: Decoding PEM data...
	I0610 07:17:54.629926    2785 main.go:141] libmachine: Parsing certificate...
	I0610 07:17:54.630009    2785 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:17:54.630038    2785 main.go:141] libmachine: Decoding PEM data...
	I0610 07:17:54.630057    2785 main.go:141] libmachine: Parsing certificate...
	I0610 07:17:54.630603    2785 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:17:54.754617    2785 main.go:141] libmachine: Creating SSH key...
	I0610 07:17:55.159490    2785 main.go:141] libmachine: Creating Disk image...
	I0610 07:17:55.159501    2785 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:17:55.159676    2785 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2
	I0610 07:17:55.168573    2785 main.go:141] libmachine: STDOUT: 
	I0610 07:17:55.168588    2785 main.go:141] libmachine: STDERR: 
	I0610 07:17:55.168640    2785 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2 +20000M
	I0610 07:17:55.175999    2785 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:17:55.176016    2785 main.go:141] libmachine: STDERR: 
	I0610 07:17:55.176033    2785 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2
	I0610 07:17:55.176038    2785 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:17:55.176077    2785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:e4:80:32:d5:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/offline-docker-569000/disk.qcow2
	I0610 07:17:55.177575    2785 main.go:141] libmachine: STDOUT: 
	I0610 07:17:55.177587    2785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:17:55.177601    2785 client.go:171] LocalClient.Create took 547.889167ms
	I0610 07:17:57.179607    2785 start.go:128] duration metric: createHost completed in 2.603271167s
	I0610 07:17:57.179629    2785 start.go:83] releasing machines lock for "offline-docker-569000", held for 2.603793666s
	W0610 07:17:57.179723    2785 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:17:57.187935    2785 out.go:177] 
	W0610 07:17:57.191918    2785 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:17:57.191934    2785 out.go:239] * 
	* 
	W0610 07:17:57.192456    2785 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:17:57.214821    2785 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-569000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:522: *** TestOffline FAILED at 2023-06-10 07:17:57.224498 -0700 PDT m=+885.533023709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-569000 -n offline-docker-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-569000 -n offline-docker-569000: exit status 7 (31.173458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-569000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-569000
--- FAIL: TestOffline (10.30s)
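
Unlike the download failures above, this is an environment problem: both VM creation attempts die when /opt/socket_vmnet/bin/socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, meaning the socket_vmnet daemon is not running on the CI agent. A minimal Go sketch, offered as an assumed diagnostic rather than minikube code, that checks the socket directly:

	// vmnet_probe.go: dial the unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this agent: "connect: connection refused", matching the log above.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

Until the daemon is restarted, every other qemu2 start in this run is likely to fail with the same signature, which matches the many ~10s Start failures in the summary table.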

TestAddons/Setup (81.56s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-912000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:88: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-912000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 90 (1m21.554166625s)

-- stdout --
	* [addons-912000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node addons-912000 in cluster addons-912000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0610 07:03:57.299089    1419 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:03:57.299198    1419 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:03:57.299202    1419 out.go:309] Setting ErrFile to fd 2...
	I0610 07:03:57.299204    1419 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:03:57.299271    1419 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:03:57.300364    1419 out.go:303] Setting JSON to false
	I0610 07:03:57.315467    1419 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":207,"bootTime":1686405630,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:03:57.315533    1419 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:03:57.323298    1419 out.go:177] * [addons-912000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:03:57.327327    1419 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:03:57.327384    1419 notify.go:220] Checking for updates...
	I0610 07:03:57.330260    1419 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:03:57.333315    1419 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:03:57.336340    1419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:03:57.339301    1419 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:03:57.342275    1419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:03:57.345421    1419 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:03:57.349287    1419 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:03:57.356303    1419 start.go:297] selected driver: qemu2
	I0610 07:03:57.356308    1419 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:03:57.356317    1419 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:03:57.358115    1419 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:03:57.361822    1419 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:03:57.364803    1419 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:03:57.364852    1419 cni.go:84] Creating CNI manager for ""
	I0610 07:03:57.364867    1419 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:03:57.364883    1419 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:03:57.364897    1419 start_flags.go:319] config:
	{Name:addons-912000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-912000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:03:57.365150    1419 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:03:57.373261    1419 out.go:177] * Starting control plane node addons-912000 in cluster addons-912000
	I0610 07:03:57.377265    1419 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:03:57.377298    1419 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:03:57.377309    1419 cache.go:57] Caching tarball of preloaded images
	I0610 07:03:57.377372    1419 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:03:57.377377    1419 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:03:57.377545    1419 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/addons-912000/config.json ...
	I0610 07:03:57.377557    1419 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/addons-912000/config.json: {Name:mk37ccfac85acd4dafc86263b0a3b12f3846f0bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:03:57.377734    1419 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:03:57.377755    1419 start.go:364] acquiring machines lock for addons-912000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:03:57.377813    1419 start.go:368] acquired machines lock for "addons-912000" in 52.75µs
	I0610 07:03:57.377834    1419 start.go:93] Provisioning new machine with config: &{Name:addons-912000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:addons-912000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:03:57.377869    1419 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:03:57.386590    1419 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 07:03:57.720373    1419 start.go:159] libmachine.API.Create for "addons-912000" (driver="qemu2")
	I0610 07:03:57.720428    1419 client.go:168] LocalClient.Create starting
	I0610 07:03:57.720565    1419 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:03:57.944613    1419 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:03:58.075742    1419 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:03:58.351559    1419 main.go:141] libmachine: Creating SSH key...
	I0610 07:03:58.470920    1419 main.go:141] libmachine: Creating Disk image...
	I0610 07:03:58.470926    1419 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:03:58.478119    1419 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/disk.qcow2
	I0610 07:03:58.575645    1419 main.go:141] libmachine: STDOUT: 
	I0610 07:03:58.575674    1419 main.go:141] libmachine: STDERR: 
	I0610 07:03:58.575738    1419 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/disk.qcow2 +20000M
	I0610 07:03:58.582930    1419 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:03:58.582943    1419 main.go:141] libmachine: STDERR: 
	I0610 07:03:58.582961    1419 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/disk.qcow2
	I0610 07:03:58.582979    1419 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:03:58.583019    1419 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:48:cb:84:48:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/disk.qcow2
	I0610 07:03:58.672669    1419 main.go:141] libmachine: STDOUT: 
	I0610 07:03:58.672707    1419 main.go:141] libmachine: STDERR: 
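
Note the plumbing in the launch command above: socket_vmnet_client connects to the unix socket at /var/run/socket_vmnet and execs qemu with that connection as file descriptor 3, which the guest NIC consumes via -device virtio-net-pci,netdev=net0 plus -netdev socket,id=net0,fd=3. A stripped-down sketch of the same wiring (no disk, firmware, or ISO; memory and machine options here are illustrative, not taken from this run) is enough to exercise the socket, and fails immediately with the "Connection refused" seen in later tests whenever the socket_vmnet daemon is down:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt -cpu host -accel hvf -m 512 -display none \
	  -device virtio-net-pci,netdev=net0 -netdev socket,id=net0,fd=3
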
	I0610 07:03:58.672713    1419 main.go:141] libmachine: Attempt 0
	I0610 07:03:58.672730    1419 main.go:141] libmachine: Searching for e2:48:cb:84:48:b2 in /var/db/dhcpd_leases ...
	I0610 07:04:00.674849    1419 main.go:141] libmachine: Attempt 1
	I0610 07:04:00.674950    1419 main.go:141] libmachine: Searching for e2:48:cb:84:48:b2 in /var/db/dhcpd_leases ...
	I0610 07:04:02.677089    1419 main.go:141] libmachine: Attempt 2
	I0610 07:04:02.677129    1419 main.go:141] libmachine: Searching for e2:48:cb:84:48:b2 in /var/db/dhcpd_leases ...
	I0610 07:04:04.679142    1419 main.go:141] libmachine: Attempt 3
	I0610 07:04:04.679154    1419 main.go:141] libmachine: Searching for e2:48:cb:84:48:b2 in /var/db/dhcpd_leases ...
	I0610 07:04:06.681127    1419 main.go:141] libmachine: Attempt 4
	I0610 07:04:06.681134    1419 main.go:141] libmachine: Searching for e2:48:cb:84:48:b2 in /var/db/dhcpd_leases ...
	I0610 07:04:08.683160    1419 main.go:141] libmachine: Attempt 5
	I0610 07:04:08.683197    1419 main.go:141] libmachine: Searching for e2:48:cb:84:48:b2 in /var/db/dhcpd_leases ...
	I0610 07:04:10.685356    1419 main.go:141] libmachine: Attempt 6
	I0610 07:04:10.685430    1419 main.go:141] libmachine: Searching for e2:48:cb:84:48:b2 in /var/db/dhcpd_leases ...
	I0610 07:04:10.685857    1419 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0610 07:04:10.685967    1419 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:04:10.686011    1419 main.go:141] libmachine: Found match: e2:48:cb:84:48:b2
	I0610 07:04:10.686052    1419 main.go:141] libmachine: IP: 192.168.105.2
	I0610 07:04:10.686079    1419 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
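
The IP discovery above is simply a poll of macOS's DHCP lease table for the MAC address minikube generated for the NIC; the same lookup can be reproduced by hand on the host (MAC and lease file path taken from the log):

	grep -B 1 -A 4 'e2:48:cb:84:48:b2' /var/db/dhcpd_leases
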
	I0610 07:04:12.707659    1419 machine.go:88] provisioning docker machine ...
	I0610 07:04:12.707740    1419 buildroot.go:166] provisioning hostname "addons-912000"
	I0610 07:04:12.708550    1419 main.go:141] libmachine: Using SSH client type: native
	I0610 07:04:12.709699    1419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d906d0] 0x104d93130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 07:04:12.709727    1419 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-912000 && echo "addons-912000" | sudo tee /etc/hostname
	I0610 07:04:12.801224    1419 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-912000
	
	I0610 07:04:12.801343    1419 main.go:141] libmachine: Using SSH client type: native
	I0610 07:04:12.801863    1419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d906d0] 0x104d93130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 07:04:12.801882    1419 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-912000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-912000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-912000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 07:04:12.874882    1419 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 07:04:12.874899    1419 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15074-894/.minikube CaCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15074-894/.minikube}
	I0610 07:04:12.874918    1419 buildroot.go:174] setting up certificates
	I0610 07:04:12.874943    1419 provision.go:83] configureAuth start
	I0610 07:04:12.874959    1419 provision.go:138] copyHostCerts
	I0610 07:04:12.875151    1419 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/ca.pem (1078 bytes)
	I0610 07:04:12.876184    1419 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/cert.pem (1123 bytes)
	I0610 07:04:12.876616    1419 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/key.pem (1679 bytes)
	I0610 07:04:12.876831    1419 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca-key.pem org=jenkins.addons-912000 san=[192.168.105.2 192.168.105.2 localhost 127.0.0.1 minikube addons-912000]
	I0610 07:04:13.058029    1419 provision.go:172] copyRemoteCerts
	I0610 07:04:13.058109    1419 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 07:04:13.058142    1419 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/id_rsa Username:docker}
	I0610 07:04:13.091778    1419 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 07:04:13.099028    1419 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 07:04:13.105797    1419 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 07:04:13.113399    1419 provision.go:86] duration metric: configureAuth took 238.451208ms
	I0610 07:04:13.113410    1419 buildroot.go:189] setting minikube options for container-runtime
	I0610 07:04:13.114187    1419 config.go:182] Loaded profile config "addons-912000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:04:13.114233    1419 main.go:141] libmachine: Using SSH client type: native
	I0610 07:04:13.114462    1419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d906d0] 0x104d93130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 07:04:13.114467    1419 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 07:04:13.172418    1419 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 07:04:13.172426    1419 buildroot.go:70] root file system type: tmpfs
	I0610 07:04:13.172482    1419 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 07:04:13.172528    1419 main.go:141] libmachine: Using SSH client type: native
	I0610 07:04:13.172756    1419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d906d0] 0x104d93130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 07:04:13.172792    1419 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 07:04:13.234406    1419 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 07:04:13.234456    1419 main.go:141] libmachine: Using SSH client type: native
	I0610 07:04:13.234713    1419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d906d0] 0x104d93130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 07:04:13.234723    1419 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 07:04:13.585397    1419 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
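The one-liner above is minikube's idempotent unit update: diff exits non-zero when docker.service.new differs from the installed unit (or, as here, when no unit exists yet, hence the "can't stat" message), and only in that case is the new file swapped into place, the daemon reloaded, and docker enabled and restarted. The same idiom reduced to its skeleton (unit name and paths hypothetical):

	sudo diff -u /etc/systemd/system/myapp.service /tmp/myapp.service.new \
	  || { sudo mv /tmp/myapp.service.new /etc/systemd/system/myapp.service \
	       && sudo systemctl daemon-reload && sudo systemctl enable --now myapp; }
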
	I0610 07:04:13.585411    1419 machine.go:91] provisioned docker machine in 877.750208ms
	I0610 07:04:13.585416    1419 client.go:171] LocalClient.Create took 15.86552025s
	I0610 07:04:13.585429    1419 start.go:167] duration metric: libmachine.API.Create for "addons-912000" took 15.865597792s
	I0610 07:04:13.585434    1419 start.go:300] post-start starting for "addons-912000" (driver="qemu2")
	I0610 07:04:13.585437    1419 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 07:04:13.585517    1419 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 07:04:13.585526    1419 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/id_rsa Username:docker}
	I0610 07:04:13.616345    1419 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 07:04:13.617708    1419 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 07:04:13.617716    1419 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15074-894/.minikube/addons for local assets ...
	I0610 07:04:13.617773    1419 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15074-894/.minikube/files for local assets ...
	I0610 07:04:13.617795    1419 start.go:303] post-start completed in 32.359917ms
	I0610 07:04:13.618114    1419 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/addons-912000/config.json ...
	I0610 07:04:13.618261    1419 start.go:128] duration metric: createHost completed in 16.24093775s
	I0610 07:04:13.618282    1419 main.go:141] libmachine: Using SSH client type: native
	I0610 07:04:13.618499    1419 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d906d0] 0x104d93130 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0610 07:04:13.618503    1419 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 07:04:13.678162    1419 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686405853.520667585
	
	I0610 07:04:13.678169    1419 fix.go:207] guest clock: 1686405853.520667585
	I0610 07:04:13.678173    1419 fix.go:220] Guest: 2023-06-10 07:04:13.520667585 -0700 PDT Remote: 2023-06-10 07:04:13.618264 -0700 PDT m=+16.337986918 (delta=-97.596415ms)
	I0610 07:04:13.678183    1419 fix.go:191] guest clock delta is within tolerance: -97.596415ms
	I0610 07:04:13.678186    1419 start.go:83] releasing machines lock for "addons-912000", held for 16.300920167s
	I0610 07:04:13.678477    1419 ssh_runner.go:195] Run: cat /version.json
	I0610 07:04:13.678485    1419 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/id_rsa Username:docker}
	I0610 07:04:13.678513    1419 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 07:04:13.678529    1419 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/addons-912000/id_rsa Username:docker}
	I0610 07:04:13.750389    1419 ssh_runner.go:195] Run: systemctl --version
	I0610 07:04:13.752507    1419 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 07:04:13.754426    1419 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 07:04:13.754456    1419 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 07:04:13.767565    1419 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 07:04:13.767575    1419 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:04:13.767687    1419 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 07:04:13.781935    1419 docker.go:633] Got preloaded images: 
	I0610 07:04:13.781945    1419 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0610 07:04:13.781993    1419 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 07:04:13.785759    1419 ssh_runner.go:195] Run: which lz4
	I0610 07:04:13.787166    1419 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 07:04:13.788376    1419 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 07:04:13.788390    1419 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0610 07:04:15.087463    1419 docker.go:597] Took 1.300370 seconds to copy over tarball
	I0610 07:04:15.087531    1419 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 07:04:16.215682    1419 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.12816425s)
	I0610 07:04:16.215701    1419 ssh_runner.go:146] rm: /preloaded.tar.lz4
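
For reference, tar's -I lz4 option filters the archive through the lz4 binary; assuming the lz4 CLI is present in the guest, the extraction above is equivalent to the explicit pipeline:

	lz4 -dc /preloaded.tar.lz4 | sudo tar -C /var -xf -
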
	I0610 07:04:16.230905    1419 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 07:04:16.233927    1419 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0610 07:04:16.239169    1419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:04:16.330382    1419 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 07:04:17.494754    1419 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.164395875s)
	I0610 07:04:17.494784    1419 start.go:481] detecting cgroup driver to use...
	I0610 07:04:17.494925    1419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 07:04:17.500200    1419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 07:04:17.503169    1419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 07:04:17.506120    1419 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 07:04:17.506143    1419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 07:04:17.509182    1419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 07:04:17.512457    1419 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 07:04:17.515741    1419 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 07:04:17.518849    1419 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 07:04:17.521740    1419 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 07:04:17.525039    1419 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 07:04:17.528436    1419 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 07:04:17.531144    1419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:04:17.607393    1419 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 07:04:17.617595    1419 start.go:481] detecting cgroup driver to use...
	I0610 07:04:17.617681    1419 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 07:04:17.624460    1419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 07:04:17.634615    1419 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 07:04:17.643852    1419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 07:04:17.648596    1419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 07:04:17.653529    1419 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 07:04:17.708682    1419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 07:04:17.714547    1419 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 07:04:17.720616    1419 ssh_runner.go:195] Run: which cri-dockerd
	I0610 07:04:17.722165    1419 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 07:04:17.725065    1419 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 07:04:17.730216    1419 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 07:04:17.806903    1419 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 07:04:17.882109    1419 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 07:04:17.882126    1419 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
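
The 144-byte daemon.json pushed here is not printed in the log; per the surrounding messages its job is to pin docker's cgroup driver, so its content is plausibly along these lines (a sketch of standard dockerd configuration, not the verbatim file):

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
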
	I0610 07:04:17.887283    1419 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:04:17.963581    1419 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 07:05:18.792167    1419 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.830596334s)
	I0610 07:05:18.797227    1419 out.go:177] 
	W0610 07:05:18.801327    1419 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	W0610 07:05:18.801356    1419 out.go:239] * 
	W0610 07:05:18.803230    1419 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:05:18.815200    1419 out.go:177] 

** /stderr **
addons_test.go:90: out/minikube-darwin-arm64 start -p addons-912000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 90
--- FAIL: TestAddons/Setup (81.56s)
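
The failure above only relays systemd's generic advice. Since the VM itself came up and only dockerd failed, the underlying error could still be pulled from the guest journal while the machine is alive, e.g. (profile name from this run; the systemctl/journalctl flags are standard, not taken from this log):

	out/minikube-darwin-arm64 ssh -p addons-912000 -- "sudo systemctl status docker --no-pager; sudo journalctl -u docker --no-pager | tail -n 50"
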

TestCertOptions (10.5s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-542000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-542000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (10.225440042s)

-- stdout --
	* [cert-options-542000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-options-542000 in cluster cert-options-542000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-542000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-542000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-542000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
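
From this test onward, every qemu2 start in the report dies the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet, so the fault is on the daemon side of the socket rather than in qemu or minikube itself. A host-side preflight would be (the daemon binary path and gateway address are assumptions based on the client path logged earlier and the socket_vmnet README, not values from this log):

	ls -l /var/run/socket_vmnet
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet
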
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-542000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-542000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 89 (79.057084ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-542000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-542000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 89
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
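
The four missing-SAN errors above are downstream of the start failure, not evidence about the certificate itself. On a node that did boot, the SAN list these assertions scan for is directly inspectable (command shape mirrors the ssh invocation above; -ext requires OpenSSL 1.1.1+):

	out/minikube-darwin-arm64 -p cert-options-542000 ssh "openssl x509 -noout -ext subjectAltName -in /var/lib/minikube/certs/apiserver.crt"
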
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-542000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-542000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-542000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 89 (39.650083ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-542000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-542000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 89
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p cert-options-542000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2023-06-10 07:18:28.03472 -0700 PDT m=+916.344220459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-542000 -n cert-options-542000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-542000 -n cert-options-542000: exit status 7 (28.75225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-542000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-542000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-542000
--- FAIL: TestCertOptions (10.50s)

TestCertExpiration (196.11s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-587000 --memory=2048 --cert-expiration=3m --driver=qemu2 
E0610 07:18:14.352212    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-587000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.714769375s)

-- stdout --
	* [cert-expiration-587000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node cert-expiration-587000 in cluster cert-expiration-587000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-587000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-587000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-587000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
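
For orientation, the shape of this test: start a cluster whose certificates expire almost immediately, wait out the window (the roughly three-minute gap between this start and the next one below is that wait), then restart with a normal expiration and expect minikube to warn about and regenerate the expired certificates. Reduced to commands (both starts are verbatim from this run; the sleep stands in for the test's internal wait):

	out/minikube-darwin-arm64 start -p cert-expiration-587000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180
	out/minikube-darwin-arm64 start -p cert-expiration-587000 --memory=2048 --cert-expiration=8760h --driver=qemu2
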
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-587000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-587000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.222432167s)

-- stdout --
	* [cert-expiration-587000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-587000 in cluster cert-expiration-587000
	* Restarting existing qemu2 VM for "cert-expiration-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-587000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-587000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node cert-expiration-587000 in cluster cert-expiration-587000
	* Restarting existing qemu2 VM for "cert-expiration-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-587000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-587000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2023-06-10 07:21:28.03755 -0700 PDT m=+1096.352763668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-587000 -n cert-expiration-587000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-587000 -n cert-expiration-587000: exit status 7 (67.922ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-587000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-587000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-587000
--- FAIL: TestCertExpiration (196.11s)

TestDockerFlags (10.45s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-401000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:45: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-401000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.205670875s)

-- stdout --
	* [docker-flags-401000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node docker-flags-401000 in cluster docker-flags-401000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-401000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:18:07.231272    2980 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:18:07.231398    2980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:18:07.231401    2980 out.go:309] Setting ErrFile to fd 2...
	I0610 07:18:07.231405    2980 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:18:07.231496    2980 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:18:07.232582    2980 out.go:303] Setting JSON to false
	I0610 07:18:07.248519    2980 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1057,"bootTime":1686405630,"procs":407,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:18:07.248640    2980 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:18:07.254607    2980 out.go:177] * [docker-flags-401000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:18:07.268432    2980 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:18:07.263635    2980 notify.go:220] Checking for updates...
	I0610 07:18:07.275384    2980 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:18:07.279483    2980 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:18:07.282512    2980 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:18:07.285435    2980 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:18:07.288440    2980 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:18:07.291831    2980 config.go:182] Loaded profile config "force-systemd-flag-194000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:18:07.291899    2980 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:18:07.291948    2980 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:18:07.295452    2980 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:18:07.302501    2980 start.go:297] selected driver: qemu2
	I0610 07:18:07.302506    2980 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:18:07.302513    2980 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:18:07.304747    2980 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:18:07.306128    2980 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:18:07.310489    2980 start_flags.go:914] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0610 07:18:07.310514    2980 cni.go:84] Creating CNI manager for ""
	I0610 07:18:07.310527    2980 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:18:07.310538    2980 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:18:07.310545    2980 start_flags.go:319] config:
	{Name:docker-flags-401000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:18:07.310635    2980 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:18:07.314454    2980 out.go:177] * Starting control plane node docker-flags-401000 in cluster docker-flags-401000
	I0610 07:18:07.322445    2980 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:18:07.322470    2980 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:18:07.322487    2980 cache.go:57] Caching tarball of preloaded images
	I0610 07:18:07.322559    2980 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:18:07.322565    2980 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:18:07.322634    2980 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/docker-flags-401000/config.json ...
	I0610 07:18:07.322653    2980 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/docker-flags-401000/config.json: {Name:mkce64c2682e2cc8573d3a669faebccfc9db9b73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:18:07.322863    2980 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:18:07.322876    2980 start.go:364] acquiring machines lock for docker-flags-401000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:18:07.322910    2980 start.go:368] acquired machines lock for "docker-flags-401000" in 28.833µs
	I0610 07:18:07.322924    2980 start.go:93] Provisioning new machine with config: &{Name:docker-flags-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:18:07.322955    2980 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:18:07.331480    2980 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 07:18:07.349512    2980 start.go:159] libmachine.API.Create for "docker-flags-401000" (driver="qemu2")
	I0610 07:18:07.349540    2980 client.go:168] LocalClient.Create starting
	I0610 07:18:07.349603    2980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:18:07.349625    2980 main.go:141] libmachine: Decoding PEM data...
	I0610 07:18:07.349636    2980 main.go:141] libmachine: Parsing certificate...
	I0610 07:18:07.349679    2980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:18:07.349697    2980 main.go:141] libmachine: Decoding PEM data...
	I0610 07:18:07.349705    2980 main.go:141] libmachine: Parsing certificate...
	I0610 07:18:07.350026    2980 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:18:07.501866    2980 main.go:141] libmachine: Creating SSH key...
	I0610 07:18:07.544355    2980 main.go:141] libmachine: Creating Disk image...
	I0610 07:18:07.544362    2980 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:18:07.544507    2980 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 07:18:07.552991    2980 main.go:141] libmachine: STDOUT: 
	I0610 07:18:07.553006    2980 main.go:141] libmachine: STDERR: 
	I0610 07:18:07.553050    2980 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2 +20000M
	I0610 07:18:07.560268    2980 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:18:07.560285    2980 main.go:141] libmachine: STDERR: 
	I0610 07:18:07.560304    2980 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 07:18:07.560309    2980 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:18:07.560351    2980 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:06:4e:72:d5:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 07:18:07.561857    2980 main.go:141] libmachine: STDOUT: 
	I0610 07:18:07.561869    2980 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:18:07.561890    2980 client.go:171] LocalClient.Create took 212.350541ms
	I0610 07:18:09.563991    2980 start.go:128] duration metric: createHost completed in 2.2410855s
	I0610 07:18:09.564055    2980 start.go:83] releasing machines lock for "docker-flags-401000", held for 2.241206542s
	W0610 07:18:09.564145    2980 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:18:09.582199    2980 out.go:177] * Deleting "docker-flags-401000" in qemu2 ...
	W0610 07:18:09.597893    2980 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:18:09.597927    2980 start.go:702] Will try again in 5 seconds ...
	I0610 07:18:14.600066    2980 start.go:364] acquiring machines lock for docker-flags-401000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:18:14.852851    2980 start.go:368] acquired machines lock for "docker-flags-401000" in 252.66225ms
	I0610 07:18:14.853051    2980 start.go:93] Provisioning new machine with config: &{Name:docker-flags-401000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:docker-flags-401000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:18:14.853440    2980 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:18:14.864084    2980 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 07:18:14.910900    2980 start.go:159] libmachine.API.Create for "docker-flags-401000" (driver="qemu2")
	I0610 07:18:14.910943    2980 client.go:168] LocalClient.Create starting
	I0610 07:18:14.911060    2980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:18:14.911102    2980 main.go:141] libmachine: Decoding PEM data...
	I0610 07:18:14.911117    2980 main.go:141] libmachine: Parsing certificate...
	I0610 07:18:14.911201    2980 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:18:14.911228    2980 main.go:141] libmachine: Decoding PEM data...
	I0610 07:18:14.911256    2980 main.go:141] libmachine: Parsing certificate...
	I0610 07:18:14.911779    2980 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:18:15.311697    2980 main.go:141] libmachine: Creating SSH key...
	I0610 07:18:15.344220    2980 main.go:141] libmachine: Creating Disk image...
	I0610 07:18:15.344226    2980 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:18:15.344376    2980 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 07:18:15.352896    2980 main.go:141] libmachine: STDOUT: 
	I0610 07:18:15.352911    2980 main.go:141] libmachine: STDERR: 
	I0610 07:18:15.352963    2980 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2 +20000M
	I0610 07:18:15.360019    2980 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:18:15.360033    2980 main.go:141] libmachine: STDERR: 
	I0610 07:18:15.360046    2980 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 07:18:15.360053    2980 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:18:15.360091    2980 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:be:85:5b:35:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/docker-flags-401000/disk.qcow2
	I0610 07:18:15.361637    2980 main.go:141] libmachine: STDOUT: 
	I0610 07:18:15.361650    2980 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:18:15.361660    2980 client.go:171] LocalClient.Create took 450.727375ms
	I0610 07:18:17.364019    2980 start.go:128] duration metric: createHost completed in 2.510531166s
	I0610 07:18:17.364126    2980 start.go:83] releasing machines lock for "docker-flags-401000", held for 2.511322958s
	W0610 07:18:17.364626    2980 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-401000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-401000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:18:17.372487    2980 out.go:177] 
	W0610 07:18:17.382912    2980 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:18:17.382945    2980 out.go:239] * 
	* 
	W0610 07:18:17.385399    2980 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:18:17.395441    2980 out.go:177] 

** /stderr **
docker_test.go:47: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-401000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:50: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-401000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-401000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 89 (78.382333ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-401000"

-- /stdout --
docker_test.go:52: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-401000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 89
docker_test.go:57: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-401000\"\n"*.
docker_test.go:57: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-401000\"\n"*.
docker_test.go:61: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-401000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:61: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-401000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 89 (42.735ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p docker-flags-401000"

-- /stdout --
docker_test.go:63: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-401000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 89
docker_test.go:67: expected "out/minikube-darwin-arm64 -p docker-flags-401000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control plane node must be running for this command\n  To start a cluster, run: \"minikube start -p docker-flags-401000\"\n"
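Everything from docker_test.go:50 onward is collateral damage: the assertions are plain substring checks against the ssh output, and here that output is only the "control plane node must be running" hint. A simplified Go sketch of that style of check (a hypothetical reconstruction for illustration, not the actual docker_test.go source; the binary path, profile name, ssh command, and expected FOO=BAR/BAZ=BAT values are taken from the log above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the guest's systemd for docker's Environment, as the test does.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "docker-flags-401000",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		// In this run the ssh step itself fails with exit status 89,
		// so the substring checks below never see real systemd output.
		fmt.Printf("ssh failed: %v\n%s", err, out)
		return
	}
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(string(out), kv) {
			fmt.Printf("env %q was not passed through to docker\n", kv)
		}
	}
}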
panic.go:522: *** TestDockerFlags FAILED at 2023-06-10 07:18:17.532597 -0700 PDT m=+905.841764918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-401000 -n docker-flags-401000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-401000 -n docker-flags-401000: exit status 7 (27.636708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-401000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-401000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-401000
--- FAIL: TestDockerFlags (10.45s)
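All of the qemu2 create attempts in this report die at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the VM never starts and every later assertion fails against a stopped host. A minimal diagnostic sketch for the build agent (a hypothetical helper, not part of the test suite; the socket path is the one shown in the log, and depending on the socket's permissions it may need to run as root):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client hands to qemu.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "Connection refused" here matches the failure mode in this report:
		// the socket_vmnet daemon is not running (or left a stale socket).
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails the way the tests did, the fix is on the host side (restart the socket_vmnet service, however it is managed on this agent); minikube's own delete-and-retry, visible above, cannot recover from it.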

TestForceSystemdFlag (11.07s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-194000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
E0610 07:18:06.976784    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
docker_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-194000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.884453959s)

-- stdout --
	* [force-systemd-flag-194000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-flag-194000 in cluster force-systemd-flag-194000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-194000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:18:01.059162    2957 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:18:01.059311    2957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:18:01.059314    2957 out.go:309] Setting ErrFile to fd 2...
	I0610 07:18:01.059316    2957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:18:01.059385    2957 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:18:01.060363    2957 out.go:303] Setting JSON to false
	I0610 07:18:01.075356    2957 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1051,"bootTime":1686405630,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:18:01.075433    2957 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:18:01.081325    2957 out.go:177] * [force-systemd-flag-194000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:18:01.085318    2957 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:18:01.085381    2957 notify.go:220] Checking for updates...
	I0610 07:18:01.088372    2957 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:18:01.093338    2957 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:18:01.096303    2957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:18:01.097729    2957 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:18:01.101328    2957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:18:01.104675    2957 config.go:182] Loaded profile config "force-systemd-env-730000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:18:01.104762    2957 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:18:01.104805    2957 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:18:01.109128    2957 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:18:01.116345    2957 start.go:297] selected driver: qemu2
	I0610 07:18:01.116351    2957 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:18:01.116358    2957 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:18:01.118114    2957 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:18:01.121306    2957 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:18:01.124333    2957 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 07:18:01.124348    2957 cni.go:84] Creating CNI manager for ""
	I0610 07:18:01.124354    2957 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:18:01.124358    2957 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:18:01.124365    2957 start_flags.go:319] config:
	{Name:force-systemd-flag-194000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-194000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:18:01.124447    2957 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:18:01.131303    2957 out.go:177] * Starting control plane node force-systemd-flag-194000 in cluster force-systemd-flag-194000
	I0610 07:18:01.135283    2957 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:18:01.135304    2957 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:18:01.135315    2957 cache.go:57] Caching tarball of preloaded images
	I0610 07:18:01.135373    2957 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:18:01.135378    2957 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:18:01.135442    2957 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/force-systemd-flag-194000/config.json ...
	I0610 07:18:01.135453    2957 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/force-systemd-flag-194000/config.json: {Name:mkca3cfe5889385c9f282cd4e575d639cbc6ad70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:18:01.135651    2957 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:18:01.135661    2957 start.go:364] acquiring machines lock for force-systemd-flag-194000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:18:01.135688    2957 start.go:368] acquired machines lock for "force-systemd-flag-194000" in 22.625µs
	I0610 07:18:01.135699    2957 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-194000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:18:01.135723    2957 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:18:01.144271    2957 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 07:18:01.160479    2957 start.go:159] libmachine.API.Create for "force-systemd-flag-194000" (driver="qemu2")
	I0610 07:18:01.160497    2957 client.go:168] LocalClient.Create starting
	I0610 07:18:01.160554    2957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:18:01.160581    2957 main.go:141] libmachine: Decoding PEM data...
	I0610 07:18:01.160589    2957 main.go:141] libmachine: Parsing certificate...
	I0610 07:18:01.160627    2957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:18:01.160642    2957 main.go:141] libmachine: Decoding PEM data...
	I0610 07:18:01.160652    2957 main.go:141] libmachine: Parsing certificate...
	I0610 07:18:01.160944    2957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:18:01.271813    2957 main.go:141] libmachine: Creating SSH key...
	I0610 07:18:01.343893    2957 main.go:141] libmachine: Creating Disk image...
	I0610 07:18:01.343901    2957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:18:01.344053    2957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I0610 07:18:01.352376    2957 main.go:141] libmachine: STDOUT: 
	I0610 07:18:01.352390    2957 main.go:141] libmachine: STDERR: 
	I0610 07:18:01.352450    2957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2 +20000M
	I0610 07:18:01.359515    2957 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:18:01.359535    2957 main.go:141] libmachine: STDERR: 
	I0610 07:18:01.359556    2957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I0610 07:18:01.359564    2957 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:18:01.359601    2957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:1e:4c:34:25:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I0610 07:18:01.361170    2957 main.go:141] libmachine: STDOUT: 
	I0610 07:18:01.361182    2957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:18:01.361200    2957 client.go:171] LocalClient.Create took 200.704125ms
	I0610 07:18:03.363348    2957 start.go:128] duration metric: createHost completed in 2.227661708s
	I0610 07:18:03.363490    2957 start.go:83] releasing machines lock for "force-systemd-flag-194000", held for 2.227820083s
	W0610 07:18:03.363580    2957 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:18:03.374819    2957 out.go:177] * Deleting "force-systemd-flag-194000" in qemu2 ...
	W0610 07:18:03.394652    2957 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:18:03.394685    2957 start.go:702] Will try again in 5 seconds ...
	I0610 07:18:08.395551    2957 start.go:364] acquiring machines lock for force-systemd-flag-194000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:18:09.564211    2957 start.go:368] acquired machines lock for "force-systemd-flag-194000" in 1.168591875s
	I0610 07:18:09.564339    2957 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-194000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-flag-194000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:18:09.564665    2957 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:18:09.574250    2957 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 07:18:09.620996    2957 start.go:159] libmachine.API.Create for "force-systemd-flag-194000" (driver="qemu2")
	I0610 07:18:09.621054    2957 client.go:168] LocalClient.Create starting
	I0610 07:18:09.621245    2957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:18:09.621311    2957 main.go:141] libmachine: Decoding PEM data...
	I0610 07:18:09.621345    2957 main.go:141] libmachine: Parsing certificate...
	I0610 07:18:09.621461    2957 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:18:09.621495    2957 main.go:141] libmachine: Decoding PEM data...
	I0610 07:18:09.621513    2957 main.go:141] libmachine: Parsing certificate...
	I0610 07:18:09.622228    2957 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:18:09.835925    2957 main.go:141] libmachine: Creating SSH key...
	I0610 07:18:09.860598    2957 main.go:141] libmachine: Creating Disk image...
	I0610 07:18:09.860603    2957 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:18:09.860774    2957 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I0610 07:18:09.869283    2957 main.go:141] libmachine: STDOUT: 
	I0610 07:18:09.869300    2957 main.go:141] libmachine: STDERR: 
	I0610 07:18:09.869361    2957 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2 +20000M
	I0610 07:18:09.876515    2957 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:18:09.876532    2957 main.go:141] libmachine: STDERR: 
	I0610 07:18:09.876549    2957 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I0610 07:18:09.876555    2957 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:18:09.876604    2957 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:8a:08:c7:a8:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-flag-194000/disk.qcow2
	I0610 07:18:09.878116    2957 main.go:141] libmachine: STDOUT: 
	I0610 07:18:09.878131    2957 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:18:09.878152    2957 client.go:171] LocalClient.Create took 257.089542ms
	I0610 07:18:11.880268    2957 start.go:128] duration metric: createHost completed in 2.315657s
	I0610 07:18:11.880301    2957 start.go:83] releasing machines lock for "force-systemd-flag-194000", held for 2.316126125s
	W0610 07:18:11.880477    2957 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-194000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:18:11.888917    2957 out.go:177] 
	W0610 07:18:11.891885    2957 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:18:11.891897    2957 out.go:239] * 
	* 
	W0610 07:18:11.893228    2957 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:18:11.904888    2957 out.go:177] 

** /stderr **
docker_test.go:87: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-194000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-194000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-194000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (59.513958ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-flag-194000"

-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-194000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
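The cgroup-driver check is the same pattern as the docker-env check: run a command over ssh and compare the output. A simplified sketch (hypothetical reconstruction, not the actual test source; given --force-systemd, the expected driver value "systemd" is an assumption based on the test's name, and since the guest never booted in this run the ssh command exits with status 89 instead):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Query the docker daemon inside the guest for its cgroup driver.
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-194000",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		fmt.Printf("ssh failed: %v\n%s", err, out) // exit status 89 in this run
		return
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" { // assumed expectation
		fmt.Printf("unexpected cgroup driver: %q\n", driver)
	}
}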
docker_test.go:100: *** TestForceSystemdFlag FAILED at 2023-06-10 07:18:11.978727 -0700 PDT m=+900.287719543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-194000 -n force-systemd-flag-194000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-194000 -n force-systemd-flag-194000: exit status 7 (31.353834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-194000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-194000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-194000
--- FAIL: TestForceSystemdFlag (11.07s)

TestForceSystemdEnv (9.83s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-730000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-730000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.618417833s)

-- stdout --
	* [force-systemd-env-730000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node force-systemd-env-730000 in cluster force-systemd-env-730000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-730000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:17:57.399928    2938 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:17:57.400071    2938 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:17:57.400074    2938 out.go:309] Setting ErrFile to fd 2...
	I0610 07:17:57.400076    2938 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:17:57.400142    2938 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:17:57.401212    2938 out.go:303] Setting JSON to false
	I0610 07:17:57.416318    2938 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1047,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:17:57.416374    2938 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:17:57.419991    2938 out.go:177] * [force-systemd-env-730000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:17:57.427926    2938 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:17:57.431886    2938 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:17:57.427983    2938 notify.go:220] Checking for updates...
	I0610 07:17:57.434937    2938 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:17:57.437918    2938 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:17:57.440910    2938 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:17:57.443969    2938 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0610 07:17:57.447278    2938 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:17:57.447317    2938 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:17:57.451877    2938 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:17:57.458830    2938 start.go:297] selected driver: qemu2
	I0610 07:17:57.458835    2938 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:17:57.458843    2938 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:17:57.460693    2938 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:17:57.463903    2938 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:17:57.466997    2938 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 07:17:57.467015    2938 cni.go:84] Creating CNI manager for ""
	I0610 07:17:57.467024    2938 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:17:57.467028    2938 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:17:57.467038    2938 start_flags.go:319] config:
	{Name:force-systemd-env-730000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-730000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:17:57.467165    2938 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:17:57.474850    2938 out.go:177] * Starting control plane node force-systemd-env-730000 in cluster force-systemd-env-730000
	I0610 07:17:57.478803    2938 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:17:57.478826    2938 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:17:57.478841    2938 cache.go:57] Caching tarball of preloaded images
	I0610 07:17:57.478900    2938 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:17:57.478905    2938 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:17:57.478966    2938 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/force-systemd-env-730000/config.json ...
	I0610 07:17:57.478978    2938 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/force-systemd-env-730000/config.json: {Name:mk522bfafb8020dd3a5042cad96e7ac4cae97ad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:17:57.479179    2938 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:17:57.479190    2938 start.go:364] acquiring machines lock for force-systemd-env-730000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:17:57.479219    2938 start.go:368] acquired machines lock for "force-systemd-env-730000" in 24.417µs
	I0610 07:17:57.479230    2938 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-730000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:17:57.479256    2938 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:17:57.486869    2938 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 07:17:57.503695    2938 start.go:159] libmachine.API.Create for "force-systemd-env-730000" (driver="qemu2")
	I0610 07:17:57.503723    2938 client.go:168] LocalClient.Create starting
	I0610 07:17:57.503777    2938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:17:57.503798    2938 main.go:141] libmachine: Decoding PEM data...
	I0610 07:17:57.503807    2938 main.go:141] libmachine: Parsing certificate...
	I0610 07:17:57.503854    2938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:17:57.503868    2938 main.go:141] libmachine: Decoding PEM data...
	I0610 07:17:57.503875    2938 main.go:141] libmachine: Parsing certificate...
	I0610 07:17:57.504208    2938 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:17:57.619621    2938 main.go:141] libmachine: Creating SSH key...
	I0610 07:17:57.675433    2938 main.go:141] libmachine: Creating Disk image...
	I0610 07:17:57.675440    2938 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:17:57.675583    2938 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0610 07:17:57.684068    2938 main.go:141] libmachine: STDOUT: 
	I0610 07:17:57.684082    2938 main.go:141] libmachine: STDERR: 
	I0610 07:17:57.684132    2938 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2 +20000M
	I0610 07:17:57.691601    2938 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:17:57.691625    2938 main.go:141] libmachine: STDERR: 
	I0610 07:17:57.691648    2938 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0610 07:17:57.691654    2938 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:17:57.691699    2938 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:cd:20:da:d8:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0610 07:17:57.693371    2938 main.go:141] libmachine: STDOUT: 
	I0610 07:17:57.693384    2938 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:17:57.693405    2938 client.go:171] LocalClient.Create took 189.678667ms
	I0610 07:17:59.695501    2938 start.go:128] duration metric: createHost completed in 2.216293166s
	I0610 07:17:59.695589    2938 start.go:83] releasing machines lock for "force-systemd-env-730000", held for 2.216429375s
	W0610 07:17:59.695660    2938 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:17:59.704969    2938 out.go:177] * Deleting "force-systemd-env-730000" in qemu2 ...
	W0610 07:17:59.724632    2938 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:17:59.724664    2938 start.go:702] Will try again in 5 seconds ...
	I0610 07:18:04.726730    2938 start.go:364] acquiring machines lock for force-systemd-env-730000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:18:04.727267    2938 start.go:368] acquired machines lock for "force-systemd-env-730000" in 414.583µs
	I0610 07:18:04.727543    2938 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-730000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:force-systemd-env-730000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:18:04.727878    2938 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:18:04.733389    2938 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0610 07:18:04.780676    2938 start.go:159] libmachine.API.Create for "force-systemd-env-730000" (driver="qemu2")
	I0610 07:18:04.780724    2938 client.go:168] LocalClient.Create starting
	I0610 07:18:04.780859    2938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:18:04.780912    2938 main.go:141] libmachine: Decoding PEM data...
	I0610 07:18:04.780938    2938 main.go:141] libmachine: Parsing certificate...
	I0610 07:18:04.781036    2938 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:18:04.781069    2938 main.go:141] libmachine: Decoding PEM data...
	I0610 07:18:04.781087    2938 main.go:141] libmachine: Parsing certificate...
	I0610 07:18:04.781665    2938 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:18:04.903687    2938 main.go:141] libmachine: Creating SSH key...
	I0610 07:18:04.933653    2938 main.go:141] libmachine: Creating Disk image...
	I0610 07:18:04.933658    2938 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:18:04.933806    2938 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0610 07:18:04.942217    2938 main.go:141] libmachine: STDOUT: 
	I0610 07:18:04.942232    2938 main.go:141] libmachine: STDERR: 
	I0610 07:18:04.942289    2938 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2 +20000M
	I0610 07:18:04.949374    2938 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:18:04.949387    2938 main.go:141] libmachine: STDERR: 
	I0610 07:18:04.949401    2938 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0610 07:18:04.949421    2938 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:18:04.949451    2938 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:71:43:7e:a0:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/force-systemd-env-730000/disk.qcow2
	I0610 07:18:04.950985    2938 main.go:141] libmachine: STDOUT: 
	I0610 07:18:04.950997    2938 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:18:04.951007    2938 client.go:171] LocalClient.Create took 170.283167ms
	I0610 07:18:06.953095    2938 start.go:128] duration metric: createHost completed in 2.225263417s
	I0610 07:18:06.953172    2938 start.go:83] releasing machines lock for "force-systemd-env-730000", held for 2.225946625s
	W0610 07:18:06.953735    2938 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:18:06.962344    2938 out.go:177] 
	W0610 07:18:06.965503    2938 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:18:06.965545    2938 out.go:239] * 
	W0610 07:18:06.968107    2938 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:18:06.976286    2938 out.go:177] 

** /stderr **
docker_test.go:151: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-730000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
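Every qemu2 start in this run fails at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), meaning the socket_vmnet daemon is not serving that socket on the host. A minimal host-side triage sketch, assuming only the layout the log itself shows (client binary under /opt/socket_vmnet/bin, socket at /var/run/socket_vmnet):

    # Is the daemon socket present, and is the daemon process alive?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet

    # Reproduce the failure in isolation; socket_vmnet_client takes the socket
    # path followed by the command to wrap, exactly as in the QEMU invocation above.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet echo ok

If the socket is missing or nothing is listening on it, restarting the host's socket_vmnet service (by whatever mechanism it was installed with, e.g. a launchd unit) should clear this whole family of GUEST_PROVISION failures.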
docker_test.go:104: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-730000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:104: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-730000 ssh "docker info --format {{.CgroupDriver}}": exit status 89 (80.20425ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p force-systemd-env-730000"

-- /stdout --
docker_test.go:106: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-730000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 89
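For reference, the assertion this test never reached: with MINIKUBE_FORCE_SYSTEMD=true, docker inside the guest is expected to report the systemd cgroup driver. Had the VM come up, the check is the command from docker_test.go:104 above, with "systemd" rather than the default "cgroupfs" as the expected output:

    # Verbatim from the log; exit status 89 here only reflects the control
    # plane never having started, not the cgroup-driver assertion itself.
    out/minikube-darwin-arm64 -p force-systemd-env-730000 ssh "docker info --format {{.CgroupDriver}}"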
docker_test.go:160: *** TestForceSystemdEnv FAILED at 2023-06-10 07:18:07.07267 -0700 PDT m=+895.381506459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-730000 -n force-systemd-env-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-730000 -n force-systemd-env-730000: exit status 7 (34.742958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-730000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-730000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-730000
--- FAIL: TestForceSystemdEnv (9.83s)

TestFunctional/parallel/ServiceCmdConnect (43.24s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-922000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-922000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58d66798bb-gr9nx" [7f381893-b5d6-4dde-b2f2-7a091639bf9f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-58d66798bb-gr9nx" [7f381893-b5d6-4dde-b2f2-7a091639bf9f] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.013603208s
functional_test.go:1647: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.105.4:32141
functional_test.go:1659: error fetching http://192.168.105.4:32141: Get "http://192.168.105.4:32141": dial tcp 192.168.105.4:32141: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:32141: Get "http://192.168.105.4:32141": dial tcp 192.168.105.4:32141: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:32141: Get "http://192.168.105.4:32141": dial tcp 192.168.105.4:32141: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:32141: Get "http://192.168.105.4:32141": dial tcp 192.168.105.4:32141: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:32141: Get "http://192.168.105.4:32141": dial tcp 192.168.105.4:32141: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:32141: Get "http://192.168.105.4:32141": dial tcp 192.168.105.4:32141: connect: connection refused
functional_test.go:1659: error fetching http://192.168.105.4:32141: Get "http://192.168.105.4:32141": dial tcp 192.168.105.4:32141: connect: connection refused
functional_test.go:1679: failed to fetch http://192.168.105.4:32141: Get "http://192.168.105.4:32141": dial tcp 192.168.105.4:32141: connect: connection refused
functional_test.go:1596: service test failed - dumping debug information
functional_test.go:1597: -----------------------service failure post-mortem--------------------------------
functional_test.go:1600: (dbg) Run:  kubectl --context functional-922000 describe po hello-node-connect
functional_test.go:1604: hello-node pod describe:
Name:             hello-node-connect-58d66798bb-gr9nx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-922000/192.168.105.4
Start Time:       Sat, 10 Jun 2023 07:08:25 -0700
Labels:           app=hello-node-connect
pod-template-hash=58d66798bb
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-58d66798bb
Containers:
echoserver-arm:
Container ID:   docker://981ece83357f9ab2a457ed44e59a65c4f413e82ae86f1e0f79fc5fec22fe86f9
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       CrashLoopBackOff
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Sat, 10 Jun 2023 07:08:43 -0700
Finished:     Sat, 10 Jun 2023 07:08:43 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4zw7q (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
kube-api-access-4zw7q:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  42s                default-scheduler  Successfully assigned default/hello-node-connect-58d66798bb-gr9nx to functional-922000
Normal   Pulling    42s                kubelet            Pulling image "registry.k8s.io/echoserver-arm:1.8"
Normal   Pulled     37s                kubelet            Successfully pulled image "registry.k8s.io/echoserver-arm:1.8" in 4.592685341s (4.592689758s including waiting)
Normal   Created    24s (x3 over 37s)  kubelet            Created container echoserver-arm
Normal   Started    24s (x3 over 37s)  kubelet            Started container echoserver-arm
Normal   Pulled     24s (x2 over 36s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Warning  BackOff    8s (x4 over 35s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-58d66798bb-gr9nx_default(7f381893-b5d6-4dde-b2f2-7a091639bf9f)

functional_test.go:1606: (dbg) Run:  kubectl --context functional-922000 logs -l app=hello-node-connect
functional_test.go:1610: hello-node logs:
exec /usr/sbin/nginx: exec format error
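"exec format error" on this arm64 node means the image's entrypoint binary (/usr/sbin/nginx here) was built for a different CPU architecture, so every container start fails immediately and the pod crash-loops; the connection refusals above are a downstream symptom, not a networking fault. A quick confirmation sketch using the standard docker CLI (not part of the test harness):

    # Prints the platform the pulled image was built for; anything other than
    # linux/arm64 on this host explains the exec format error.
    docker image inspect --format '{{.Os}}/{{.Architecture}}' registry.k8s.io/echoserver-arm:1.8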
functional_test.go:1612: (dbg) Run:  kubectl --context functional-922000 describe svc hello-node-connect
functional_test.go:1616: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.103.105.110
IPs:                      10.103.105.110
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32141/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
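The empty Endpoints: field above is the immediate cause of the "connection refused" fetches: with no Ready pod behind the selector, kube-proxy has nothing to forward NodePort 32141 to. Two standard kubectl checks (not part of the test harness) that make the chain visible:

    # An empty ENDPOINTS column confirms the service has no backend.
    kubectl --context functional-922000 get endpoints hello-node-connect
    # Ties the missing endpoints back to the crash-looping pod described above.
    kubectl --context functional-922000 get pods -l app=hello-node-connect -o wide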
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-922000 -n functional-922000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                         Args                                                         |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| service | functional-922000                                                                                                    | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT | 10 Jun 23 07:08 PDT |
	|         | service hello-node --url                                                                                             |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                     |                   |         |         |                     |                     |
	| service | functional-922000 service                                                                                            | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT | 10 Jun 23 07:08 PDT |
	|         | hello-node --url                                                                                                     |                   |         |         |                     |                     |
	| mount   | -p functional-922000                                                                                                 | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port287119867/001:/mount-9p       |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh findmnt                                                                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh findmnt                                                                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT | 10 Jun 23 07:08 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh -- ls                                                                                          | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT | 10 Jun 23 07:08 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh cat                                                                                            | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT | 10 Jun 23 07:08 PDT |
	|         | /mount-9p/test-1686406125887724000                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh stat                                                                                           | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT | 10 Jun 23 07:08 PDT |
	|         | /mount-9p/created-by-test                                                                                            |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh stat                                                                                           | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT | 10 Jun 23 07:08 PDT |
	|         | /mount-9p/created-by-pod                                                                                             |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh sudo                                                                                           | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT | 10 Jun 23 07:08 PDT |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh findmnt                                                                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-922000                                                                                                 | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3024479691/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                                                                  |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh findmnt                                                                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT | 10 Jun 23 07:08 PDT |
	|         | -T /mount-9p | grep 9p                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh -- ls                                                                                          | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT | 10 Jun 23 07:08 PDT |
	|         | -la /mount-9p                                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh sudo                                                                                           | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | umount -f /mount-9p                                                                                                  |                   |         |         |                     |                     |
	| mount   | -p functional-922000                                                                                                 | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup761318276/001:/mount1    |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-922000                                                                                                 | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup761318276/001:/mount3    |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| mount   | -p functional-922000                                                                                                 | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup761318276/001:/mount2    |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh findmnt                                                                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh findmnt                                                                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh findmnt                                                                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh findmnt                                                                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh findmnt                                                                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:08 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh findmnt                                                                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-922000 ssh findmnt                                                                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT |                     |
	|         | -T /mount1                                                                                                           |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 07:07:35
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 07:07:35.864076    1707 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:07:35.864201    1707 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:07:35.864203    1707 out.go:309] Setting ErrFile to fd 2...
	I0610 07:07:35.864204    1707 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:07:35.864274    1707 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:07:35.865383    1707 out.go:303] Setting JSON to false
	I0610 07:07:35.881163    1707 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":425,"bootTime":1686405630,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:07:35.881215    1707 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:07:35.885864    1707 out.go:177] * [functional-922000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:07:35.892764    1707 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:07:35.892835    1707 notify.go:220] Checking for updates...
	I0610 07:07:35.896772    1707 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:07:35.899737    1707 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:07:35.902765    1707 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:07:35.905817    1707 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:07:35.908731    1707 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:07:35.911922    1707 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:07:35.911960    1707 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:07:35.916775    1707 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 07:07:35.923744    1707 start.go:297] selected driver: qemu2
	I0610 07:07:35.923746    1707 start.go:875] validating driver "qemu2" against &{Name:functional-922000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-922000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:07:35.923797    1707 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:07:35.925688    1707 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:07:35.925707    1707 cni.go:84] Creating CNI manager for ""
	I0610 07:07:35.925712    1707 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:07:35.925716    1707 start_flags.go:319] config:
	{Name:functional-922000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-922000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:07:35.925815    1707 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:07:35.934729    1707 out.go:177] * Starting control plane node functional-922000 in cluster functional-922000
	I0610 07:07:35.938751    1707 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:07:35.938776    1707 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:07:35.938788    1707 cache.go:57] Caching tarball of preloaded images
	I0610 07:07:35.938843    1707 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:07:35.938848    1707 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:07:35.939144    1707 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/config.json ...
	I0610 07:07:35.939528    1707 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:07:35.939539    1707 start.go:364] acquiring machines lock for functional-922000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:07:35.939567    1707 start.go:368] acquired machines lock for "functional-922000" in 24.458µs
	I0610 07:07:35.939576    1707 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:07:35.939579    1707 fix.go:55] fixHost starting: 
	I0610 07:07:35.940588    1707 fix.go:103] recreateIfNeeded on functional-922000: state=Running err=<nil>
	W0610 07:07:35.940596    1707 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:07:35.948750    1707 out.go:177] * Updating the running qemu2 "functional-922000" VM ...
	I0610 07:07:35.952746    1707 machine.go:88] provisioning docker machine ...
	I0610 07:07:35.952753    1707 buildroot.go:166] provisioning hostname "functional-922000"
	I0610 07:07:35.952807    1707 main.go:141] libmachine: Using SSH client type: native
	I0610 07:07:35.953094    1707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b906d0] 0x104b93130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0610 07:07:35.953100    1707 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-922000 && echo "functional-922000" | sudo tee /etc/hostname
	I0610 07:07:36.014130    1707 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-922000
	
	I0610 07:07:36.014156    1707 main.go:141] libmachine: Using SSH client type: native
	I0610 07:07:36.014382    1707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b906d0] 0x104b93130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0610 07:07:36.014388    1707 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-922000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-922000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-922000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 07:07:36.069735    1707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 07:07:36.069743    1707 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15074-894/.minikube CaCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15074-894/.minikube}
	I0610 07:07:36.069749    1707 buildroot.go:174] setting up certificates
	I0610 07:07:36.069753    1707 provision.go:83] configureAuth start
	I0610 07:07:36.069756    1707 provision.go:138] copyHostCerts
	I0610 07:07:36.069821    1707 exec_runner.go:144] found /Users/jenkins/minikube-integration/15074-894/.minikube/ca.pem, removing ...
	I0610 07:07:36.069824    1707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15074-894/.minikube/ca.pem
	I0610 07:07:36.069924    1707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/ca.pem (1078 bytes)
	I0610 07:07:36.070075    1707 exec_runner.go:144] found /Users/jenkins/minikube-integration/15074-894/.minikube/cert.pem, removing ...
	I0610 07:07:36.070076    1707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15074-894/.minikube/cert.pem
	I0610 07:07:36.070114    1707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/cert.pem (1123 bytes)
	I0610 07:07:36.070220    1707 exec_runner.go:144] found /Users/jenkins/minikube-integration/15074-894/.minikube/key.pem, removing ...
	I0610 07:07:36.070221    1707 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15074-894/.minikube/key.pem
	I0610 07:07:36.070256    1707 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/key.pem (1679 bytes)
	I0610 07:07:36.070384    1707 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca-key.pem org=jenkins.functional-922000 san=[192.168.105.4 192.168.105.4 localhost 127.0.0.1 minikube functional-922000]
	I0610 07:07:36.158952    1707 provision.go:172] copyRemoteCerts
	I0610 07:07:36.158993    1707 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 07:07:36.158999    1707 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/id_rsa Username:docker}
	I0610 07:07:36.189439    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 07:07:36.196346    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 07:07:36.203954    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0610 07:07:36.211591    1707 provision.go:86] duration metric: configureAuth took 141.837875ms
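
configureAuth generated a server certificate signed by the local CA with the SANs listed above (VM IP, localhost, minikube, machine name), then copied it to /etc/docker. A sketch of issuing such a cert with Go's crypto/x509; the throwaway self-signed CA stands in for minikube's ca.pem/ca-key.pem, error handling is elided, and all names are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA; minikube loads this pair from certs/ca.pem and certs/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the SANs from the provision log above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-922000"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "functional-922000"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.105.4"), net.ParseIP("127.0.0.1")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
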
	I0610 07:07:36.211596    1707 buildroot.go:189] setting minikube options for container-runtime
	I0610 07:07:36.211700    1707 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:07:36.211747    1707 main.go:141] libmachine: Using SSH client type: native
	I0610 07:07:36.211968    1707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b906d0] 0x104b93130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0610 07:07:36.211971    1707 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 07:07:36.266151    1707 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 07:07:36.266155    1707 buildroot.go:70] root file system type: tmpfs
	I0610 07:07:36.266204    1707 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 07:07:36.266244    1707 main.go:141] libmachine: Using SSH client type: native
	I0610 07:07:36.266471    1707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b906d0] 0x104b93130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0610 07:07:36.266504    1707 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 07:07:36.322182    1707 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 07:07:36.322231    1707 main.go:141] libmachine: Using SSH client type: native
	I0610 07:07:36.322468    1707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b906d0] 0x104b93130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0610 07:07:36.322475    1707 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 07:07:36.380367    1707 main.go:141] libmachine: SSH cmd err, output: <nil>: 
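
The final SSH command is a compare-then-swap: docker.service.new replaces the live unit, and the daemon is reloaded and Docker restarted, only when diff reports a difference, so an unchanged config costs nothing. The same pattern locally in Go (hypothetical paths; the real provisioner shells out over SSH as shown above):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // updateIfChanged writes newContent to path only when it differs from the
    // current content: write a ".new" sibling, then rename it over the original.
    func updateIfChanged(path string, newContent []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, newContent) {
            return false, nil // identical: skip the swap and the service restart
        }
        tmp := path + ".new"
        if err := os.WriteFile(tmp, newContent, 0o644); err != nil {
            return false, err
        }
        return true, os.Rename(tmp, path) // the mv in the log, done atomically
    }

    func main() {
        changed, err := updateIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        fmt.Println("changed:", changed, "err:", err)
    }
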
	I0610 07:07:36.380374    1707 machine.go:91] provisioned docker machine in 427.639541ms
	I0610 07:07:36.380377    1707 start.go:300] post-start starting for "functional-922000" (driver="qemu2")
	I0610 07:07:36.380380    1707 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 07:07:36.380431    1707 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 07:07:36.380438    1707 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/id_rsa Username:docker}
	I0610 07:07:36.411902    1707 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 07:07:36.413308    1707 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 07:07:36.413312    1707 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15074-894/.minikube/addons for local assets ...
	I0610 07:07:36.413364    1707 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15074-894/.minikube/files for local assets ...
	I0610 07:07:36.413467    1707 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem -> 13362.pem in /etc/ssl/certs
	I0610 07:07:36.413579    1707 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/test/nested/copy/1336/hosts -> hosts in /etc/test/nested/copy/1336
	I0610 07:07:36.413604    1707 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1336
	I0610 07:07:36.416370    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem --> /etc/ssl/certs/13362.pem (1708 bytes)
	I0610 07:07:36.423739    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/test/nested/copy/1336/hosts --> /etc/test/nested/copy/1336/hosts (40 bytes)
	I0610 07:07:36.431271    1707 start.go:303] post-start completed in 50.890666ms
	I0610 07:07:36.431275    1707 fix.go:57] fixHost completed within 491.7135ms
	I0610 07:07:36.431309    1707 main.go:141] libmachine: Using SSH client type: native
	I0610 07:07:36.431539    1707 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104b906d0] 0x104b93130 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0610 07:07:36.431542    1707 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 07:07:36.485754    1707 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686406056.582726439
	
	I0610 07:07:36.485758    1707 fix.go:207] guest clock: 1686406056.582726439
	I0610 07:07:36.485761    1707 fix.go:220] Guest: 2023-06-10 07:07:36.582726439 -0700 PDT Remote: 2023-06-10 07:07:36.431276 -0700 PDT m=+0.586023959 (delta=151.450439ms)
	I0610 07:07:36.485770    1707 fix.go:191] guest clock delta is within tolerance: 151.450439ms
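
fix.go reads the guest's `date +%s.%N`, compares it against host time, and only resyncs the clock when the delta exceeds a tolerance; here the 151ms delta is accepted. A sketch of that comparison (the one-second tolerance is an assumed value for illustration):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        out := "1686406056.582726439" // guest `date +%s.%N` output from the log
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        // assumed tolerance; the log accepts a 151ms delta without resyncing
        fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < time.Second)
    }
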
	I0610 07:07:36.485772    1707 start.go:83] releasing machines lock for "functional-922000", held for 546.220958ms
	I0610 07:07:36.486020    1707 ssh_runner.go:195] Run: cat /version.json
	I0610 07:07:36.486025    1707 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/id_rsa Username:docker}
	I0610 07:07:36.486052    1707 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 07:07:36.486069    1707 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/id_rsa Username:docker}
	I0610 07:07:36.556834    1707 ssh_runner.go:195] Run: systemctl --version
	I0610 07:07:36.559034    1707 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 07:07:36.560785    1707 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 07:07:36.560808    1707 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 07:07:36.564078    1707 cni.go:258] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0610 07:07:36.564083    1707 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:07:36.564145    1707 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 07:07:36.572963    1707 docker.go:633] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-922000
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0610 07:07:36.572968    1707 docker.go:563] Images already preloaded, skipping extraction
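
Preload detection reduces to listing `docker images --format {{.Repository}}:{{.Tag}}` and checking that every expected image for this Kubernetes version is present. A sketch of that set comparison (the expected list is trimmed to three entries from the stdout block above):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // expected images for this Kubernetes version (subset of the log above)
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.27.2",
            "registry.k8s.io/etcd:3.5.7-0",
            "registry.k8s.io/pause:3.9",
        }
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            fmt.Println("docker not available:", err)
            return
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        for _, img := range expected {
            if !have[img] {
                fmt.Println("missing, extraction needed:", img)
                return
            }
        }
        fmt.Println("images already preloaded, skipping extraction")
    }
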
	I0610 07:07:36.572972    1707 start.go:481] detecting cgroup driver to use...
	I0610 07:07:36.573030    1707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 07:07:36.578725    1707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 07:07:36.582242    1707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 07:07:36.585106    1707 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 07:07:36.585127    1707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 07:07:36.588388    1707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 07:07:36.591842    1707 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 07:07:36.595393    1707 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 07:07:36.598438    1707 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 07:07:36.601365    1707 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 07:07:36.604486    1707 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 07:07:36.608035    1707 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 07:07:36.611301    1707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:07:36.692981    1707 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 07:07:36.699973    1707 start.go:481] detecting cgroup driver to use...
	I0610 07:07:36.700045    1707 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 07:07:36.709217    1707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 07:07:36.715631    1707 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 07:07:36.721977    1707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 07:07:36.726629    1707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 07:07:36.731485    1707 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 07:07:36.736949    1707 ssh_runner.go:195] Run: which cri-dockerd
	I0610 07:07:36.738394    1707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 07:07:36.741179    1707 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 07:07:36.746156    1707 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 07:07:36.836065    1707 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 07:07:36.914280    1707 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 07:07:36.914288    1707 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 07:07:36.919919    1707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:07:37.017525    1707 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 07:07:48.400904    1707 ssh_runner.go:235] Completed: sudo systemctl restart docker: (11.383746083s)
	I0610 07:07:48.400984    1707 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 07:07:48.471159    1707 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 07:07:48.539651    1707 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 07:07:48.610402    1707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:07:48.673678    1707 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 07:07:48.680652    1707 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:07:48.754859    1707 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0610 07:07:48.780013    1707 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 07:07:48.780102    1707 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0610 07:07:48.782074    1707 start.go:549] Will wait 60s for crictl version
	I0610 07:07:48.782112    1707 ssh_runner.go:195] Run: which crictl
	I0610 07:07:48.783430    1707 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 07:07:48.795688    1707 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0610 07:07:48.795740    1707 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 07:07:48.803313    1707 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 07:07:48.814827    1707 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0610 07:07:48.814980    1707 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0610 07:07:48.820708    1707 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0610 07:07:48.824756    1707 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:07:48.824813    1707 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 07:07:48.835260    1707 docker.go:633] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-922000
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0610 07:07:48.835266    1707 docker.go:563] Images already preloaded, skipping extraction
	I0610 07:07:48.835314    1707 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 07:07:48.841160    1707 docker.go:633] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-922000
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0610 07:07:48.841174    1707 cache_images.go:84] Images are preloaded, skipping loading
	I0610 07:07:48.841219    1707 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 07:07:48.848953    1707 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0610 07:07:48.848968    1707 cni.go:84] Creating CNI manager for ""
	I0610 07:07:48.848972    1707 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:07:48.848977    1707 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 07:07:48.848985    1707 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-922000 NodeName:functional-922000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 07:07:48.849069    1707 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-922000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
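kubeadm.go renders this multi-document YAML from the kubeadm options struct logged above. A heavily cut-down sketch of that templating step using text/template; the template below covers only a few ClusterConfiguration fields and is not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.Endpoint}}
    kubernetesVersion: {{.Version}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    type opts struct{ Endpoint, Version, PodSubnet, ServiceSubnet string }

    func main() {
        t := template.Must(template.New("kubeadm").Parse(clusterCfg))
        // values taken from the kubeadm options logged above
        t.Execute(os.Stdout, opts{
            Endpoint:      "control-plane.minikube.internal:8441",
            Version:       "v1.27.2",
            PodSubnet:     "10.244.0.0/16",
            ServiceSubnet: "10.96.0.0/12",
        })
    }
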
	I0610 07:07:48.849104    1707 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=functional-922000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:functional-922000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:}
	I0610 07:07:48.849166    1707 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 07:07:48.852468    1707 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 07:07:48.852491    1707 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 07:07:48.855618    1707 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0610 07:07:48.860528    1707 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 07:07:48.865973    1707 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1953 bytes)
	I0610 07:07:48.871141    1707 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0610 07:07:48.872641    1707 certs.go:56] Setting up /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000 for IP: 192.168.105.4
	I0610 07:07:48.872647    1707 certs.go:190] acquiring lock for shared ca certs: {Name:mk2bb46910d2e2fc8cdcab49d7502062bd19dc79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:07:48.872786    1707 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15074-894/.minikube/ca.key
	I0610 07:07:48.872822    1707 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15074-894/.minikube/proxy-client-ca.key
	I0610 07:07:48.872884    1707 certs.go:315] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.key
	I0610 07:07:48.872927    1707 certs.go:315] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/apiserver.key.942c473b
	I0610 07:07:48.872962    1707 certs.go:315] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/proxy-client.key
	I0610 07:07:48.873110    1707 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/1336.pem (1338 bytes)
	W0610 07:07:48.873135    1707 certs.go:433] ignoring /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/1336_empty.pem, impossibly tiny 0 bytes
	I0610 07:07:48.873141    1707 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca-key.pem (1675 bytes)
	I0610 07:07:48.873163    1707 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem (1078 bytes)
	I0610 07:07:48.873182    1707 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem (1123 bytes)
	I0610 07:07:48.873201    1707 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem (1679 bytes)
	I0610 07:07:48.873240    1707 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem (1708 bytes)
	I0610 07:07:48.873586    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 07:07:48.880560    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0610 07:07:48.888080    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 07:07:48.895620    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 07:07:48.903204    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 07:07:48.910715    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0610 07:07:48.917672    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 07:07:48.924526    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 07:07:48.931580    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem --> /usr/share/ca-certificates/13362.pem (1708 bytes)
	I0610 07:07:48.939034    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 07:07:48.946146    1707 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/certs/1336.pem --> /usr/share/ca-certificates/1336.pem (1338 bytes)
	I0610 07:07:48.952783    1707 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 07:07:48.958213    1707 ssh_runner.go:195] Run: openssl version
	I0610 07:07:48.959986    1707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13362.pem && ln -fs /usr/share/ca-certificates/13362.pem /etc/ssl/certs/13362.pem"
	I0610 07:07:48.963497    1707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13362.pem
	I0610 07:07:48.965119    1707 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 14:05 /usr/share/ca-certificates/13362.pem
	I0610 07:07:48.965135    1707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13362.pem
	I0610 07:07:48.967032    1707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13362.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 07:07:48.969834    1707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 07:07:48.972957    1707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 07:07:48.974618    1707 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 14:05 /usr/share/ca-certificates/minikubeCA.pem
	I0610 07:07:48.974641    1707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 07:07:48.976568    1707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 07:07:48.979936    1707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1336.pem && ln -fs /usr/share/ca-certificates/1336.pem /etc/ssl/certs/1336.pem"
	I0610 07:07:48.983292    1707 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1336.pem
	I0610 07:07:48.984794    1707 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 14:05 /usr/share/ca-certificates/1336.pem
	I0610 07:07:48.984808    1707 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1336.pem
	I0610 07:07:48.986817    1707 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1336.pem /etc/ssl/certs/51391683.0"
	I0610 07:07:48.989636    1707 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 07:07:48.990959    1707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0610 07:07:48.992720    1707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0610 07:07:48.994659    1707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0610 07:07:48.996458    1707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0610 07:07:48.998348    1707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0610 07:07:49.000186    1707 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
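
Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours (86400 seconds), so expiring control-plane certs get regenerated before the restart. The equivalent check in native Go, parsing the PEM directly (the path is a placeholder):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within d.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        // 86400s == 24h, matching the -checkend argument in the log
        soon, err := expiresWithin("/tmp/apiserver.crt", 24*time.Hour)
        fmt.Println("expiresSoon:", soon, "err:", err)
    }
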
	I0610 07:07:49.002133    1707 kubeadm.go:404] StartCluster: {Name:functional-922000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:functional-922000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:07:49.002207    1707 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 07:07:49.008452    1707 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 07:07:49.012079    1707 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0610 07:07:49.012085    1707 kubeadm.go:636] restartCluster start
	I0610 07:07:49.012108    1707 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0610 07:07:49.015321    1707 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0610 07:07:49.015824    1707 kubeconfig.go:92] found "functional-922000" server: "https://192.168.105.4:8441"
	I0610 07:07:49.016925    1707 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0610 07:07:49.020382    1707 kubeadm.go:602] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0610 07:07:49.020386    1707 kubeadm.go:1123] stopping kube-system containers ...
	I0610 07:07:49.020422    1707 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 07:07:49.027483    1707 docker.go:459] Stopping containers: [e8b4e63066a8 74ce844d9a43 fad9fe01cae0 434eda920b8b 5b4a1dd90aab e0a6e45c334f 1eee634b3e32 b6506990de02 6f5e013f71e7 0ac09b58cd01 94caa5af9f56 e5b53bc52805 1851aae1c841 d7485d971008 09dd8eafbd9f ba9e44dd61b1 fb7ffd9b7a95 af26acc952c2 9f0188e6ad96 1b450c7cdca4 0b2902e6b79e b8758ff26ed2 c802b41065bc 3533737ddfd7 68c192ca4215 c77a748a8393]
	I0610 07:07:49.027531    1707 ssh_runner.go:195] Run: docker stop e8b4e63066a8 74ce844d9a43 fad9fe01cae0 434eda920b8b 5b4a1dd90aab e0a6e45c334f 1eee634b3e32 b6506990de02 6f5e013f71e7 0ac09b58cd01 94caa5af9f56 e5b53bc52805 1851aae1c841 d7485d971008 09dd8eafbd9f ba9e44dd61b1 fb7ffd9b7a95 af26acc952c2 9f0188e6ad96 1b450c7cdca4 0b2902e6b79e b8758ff26ed2 c802b41065bc 3533737ddfd7 68c192ca4215 c77a748a8393
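
Stopping the kube-system containers is a two-step docker invocation: list matching container IDs with a name filter, then pass the whole batch to a single docker stop. A sketch with os/exec, reusing the filter string from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // list container IDs whose names match kube-system pods
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        if err != nil {
            fmt.Println(err)
            return
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            return // nothing to stop
        }
        // docker stop accepts the whole batch in one call
        args := append([]string{"stop"}, ids...)
        if err := exec.Command("docker", args...).Run(); err != nil {
            fmt.Println(err)
        }
    }
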
	I0610 07:07:49.033849    1707 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0610 07:07:49.125926    1707 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 07:07:49.130154    1707 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Jun 10 14:06 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Jun 10 14:06 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jun 10 14:06 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Jun 10 14:06 /etc/kubernetes/scheduler.conf
	
	I0610 07:07:49.130190    1707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0610 07:07:49.133560    1707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0610 07:07:49.136838    1707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0610 07:07:49.140349    1707 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 07:07:49.140385    1707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0610 07:07:49.143918    1707 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0610 07:07:49.147219    1707 kubeadm.go:166] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0610 07:07:49.147242    1707 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0610 07:07:49.150101    1707 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 07:07:49.153040    1707 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0610 07:07:49.153044    1707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 07:07:49.174772    1707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 07:07:49.510947    1707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0610 07:07:49.613433    1707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 07:07:49.659723    1707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0610 07:07:49.684441    1707 api_server.go:52] waiting for apiserver process to appear ...
	I0610 07:07:49.684507    1707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 07:07:50.199650    1707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 07:07:50.699613    1707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 07:07:50.703823    1707 api_server.go:72] duration metric: took 1.01941775s to wait for apiserver process to appear ...
	I0610 07:07:50.703833    1707 api_server.go:88] waiting for apiserver healthz status ...
	I0610 07:07:50.703840    1707 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0610 07:07:52.549409    1707 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0610 07:07:52.549417    1707 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0610 07:07:53.051485    1707 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0610 07:07:53.056291    1707 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0610 07:07:53.056300    1707 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0610 07:07:53.551439    1707 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0610 07:07:53.555034    1707 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0610 07:07:53.555043    1707 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0610 07:07:54.051553    1707 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0610 07:07:54.065302    1707 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0610 07:07:54.081555    1707 api_server.go:141] control plane version: v1.27.2
	I0610 07:07:54.081576    1707 api_server.go:131] duration metric: took 3.377852s to wait for apiserver health ...
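
The healthz wait tolerates the early 403 (the probe authenticates as system:anonymous) and 500 (poststarthook/rbac/bootstrap-roles and friends still failing) responses seen above, and keeps polling until /healthz returns 200. A sketch of that retry loop; TLS verification is skipped here for brevity, whereas minikube trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // sketch only: skip verification instead of loading the cluster CA
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: ok
                }
                // 403/500 while bootstrap hooks finish: keep polling
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %v", timeout)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.105.4:8441/healthz", time.Minute))
    }
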
	I0610 07:07:54.081589    1707 cni.go:84] Creating CNI manager for ""
	I0610 07:07:54.081601    1707 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:07:54.086976    1707 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 07:07:54.090988    1707 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 07:07:54.099434    1707 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0610 07:07:54.110758    1707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 07:07:54.119912    1707 system_pods.go:59] 7 kube-system pods found
	I0610 07:07:54.119925    1707 system_pods.go:61] "coredns-5d78c9869d-t6psw" [b1eaff8e-23c4-41d0-9ed9-cfd279442f52] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0610 07:07:54.119931    1707 system_pods.go:61] "etcd-functional-922000" [4cda82e2-d9ae-4bde-9f67-9223cc55806b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0610 07:07:54.119937    1707 system_pods.go:61] "kube-apiserver-functional-922000" [32edc365-f40f-49ca-9a1d-62f4a69cfe54] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0610 07:07:54.119947    1707 system_pods.go:61] "kube-controller-manager-functional-922000" [4a883002-ac7f-4e1a-9c9e-4de5a8543b6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0610 07:07:54.119951    1707 system_pods.go:61] "kube-proxy-85t2n" [e1ed72d1-7698-4522-808a-043e74348302] Running
	I0610 07:07:54.119955    1707 system_pods.go:61] "kube-scheduler-functional-922000" [fcb04c5a-d76d-424d-9055-85d14e00dca4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0610 07:07:54.119959    1707 system_pods.go:61] "storage-provisioner" [2bceeb6a-f269-4715-8ccd-234fa86f7c70] Running
	I0610 07:07:54.119962    1707 system_pods.go:74] duration metric: took 9.199458ms to wait for pod list to return data ...
	I0610 07:07:54.119970    1707 node_conditions.go:102] verifying NodePressure condition ...
	I0610 07:07:54.122364    1707 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0610 07:07:54.122375    1707 node_conditions.go:123] node cpu capacity is 2
	I0610 07:07:54.122382    1707 node_conditions.go:105] duration metric: took 2.409459ms to run NodePressure ...
	I0610 07:07:54.122391    1707 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0610 07:07:54.191825    1707 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0610 07:07:54.193953    1707 kubeadm.go:787] kubelet initialised
	I0610 07:07:54.193956    1707 kubeadm.go:788] duration metric: took 2.126167ms waiting for restarted kubelet to initialise ...
	I0610 07:07:54.193960    1707 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 07:07:54.196622    1707 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5d78c9869d-t6psw" in "kube-system" namespace to be "Ready" ...
	I0610 07:07:56.205103    1707 pod_ready.go:102] pod "coredns-5d78c9869d-t6psw" in "kube-system" namespace has status "Ready":"False"
	I0610 07:07:58.704789    1707 pod_ready.go:102] pod "coredns-5d78c9869d-t6psw" in "kube-system" namespace has status "Ready":"False"
	I0610 07:08:01.204430    1707 pod_ready.go:102] pod "coredns-5d78c9869d-t6psw" in "kube-system" namespace has status "Ready":"False"
	I0610 07:08:01.704316    1707 pod_ready.go:92] pod "coredns-5d78c9869d-t6psw" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:01.704322    1707 pod_ready.go:81] duration metric: took 7.507949792s waiting for pod "coredns-5d78c9869d-t6psw" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:01.704325    1707 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:03.712781    1707 pod_ready.go:102] pod "etcd-functional-922000" in "kube-system" namespace has status "Ready":"False"
	I0610 07:08:06.214158    1707 pod_ready.go:92] pod "etcd-functional-922000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:06.214169    1707 pod_ready.go:81] duration metric: took 4.50999125s waiting for pod "etcd-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:06.214177    1707 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:06.218894    1707 pod_ready.go:92] pod "kube-apiserver-functional-922000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:06.218899    1707 pod_ready.go:81] duration metric: took 4.717041ms waiting for pod "kube-apiserver-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:06.218904    1707 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:08.238327    1707 pod_ready.go:102] pod "kube-controller-manager-functional-922000" in "kube-system" namespace has status "Ready":"False"
	I0610 07:08:09.730815    1707 pod_ready.go:92] pod "kube-controller-manager-functional-922000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:09.730826    1707 pod_ready.go:81] duration metric: took 3.512034834s waiting for pod "kube-controller-manager-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:09.730833    1707 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-85t2n" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:09.735104    1707 pod_ready.go:92] pod "kube-proxy-85t2n" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:09.735108    1707 pod_ready.go:81] duration metric: took 4.271458ms waiting for pod "kube-proxy-85t2n" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:09.735113    1707 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:09.739753    1707 pod_ready.go:92] pod "kube-scheduler-functional-922000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:09.739758    1707 pod_ready.go:81] duration metric: took 4.642542ms waiting for pod "kube-scheduler-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:09.739763    1707 pod_ready.go:38] duration metric: took 15.546325208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 07:08:09.739775    1707 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 07:08:09.745251    1707 ops.go:34] apiserver oom_adj: -16
	I0610 07:08:09.745257    1707 kubeadm.go:640] restartCluster took 20.733870208s
	I0610 07:08:09.745261    1707 kubeadm.go:406] StartCluster complete in 20.743830458s
	I0610 07:08:09.745269    1707 settings.go:142] acquiring lock: {Name:mk4cd069708b06d9de03f9b5393c32ff96cdd016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:08:09.745367    1707 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:08:09.745792    1707 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/kubeconfig: {Name:mkac2e0f9c3956b550c91557119bdbcf28863bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:08:09.746041    1707 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 07:08:09.746081    1707 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0610 07:08:09.746132    1707 addons.go:66] Setting storage-provisioner=true in profile "functional-922000"
	I0610 07:08:09.746139    1707 addons.go:228] Setting addon storage-provisioner=true in "functional-922000"
	W0610 07:08:09.746142    1707 addons.go:237] addon storage-provisioner should already be in state true
	I0610 07:08:09.746167    1707 host.go:66] Checking if "functional-922000" exists ...
	I0610 07:08:09.746177    1707 addons.go:66] Setting default-storageclass=true in profile "functional-922000"
	I0610 07:08:09.746185    1707 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-922000"
	I0610 07:08:09.746217    1707 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	W0610 07:08:09.746531    1707 host.go:54] host status for "functional-922000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/monitor: connect: connection refused
	W0610 07:08:09.746539    1707 addons_storage_classes.go:55] "functional-922000" is not running, writing default-storageclass=true to disk and skipping enablement
	I0610 07:08:09.746541    1707 addons.go:228] Setting addon default-storageclass=true in "functional-922000"
	W0610 07:08:09.746543    1707 addons.go:237] addon default-storageclass should already be in state true
	I0610 07:08:09.746552    1707 host.go:66] Checking if "functional-922000" exists ...
	I0610 07:08:09.751061    1707 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 07:08:09.754151    1707 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 07:08:09.754155    1707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 07:08:09.754163    1707 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/id_rsa Username:docker}
	I0610 07:08:09.754673    1707 kapi.go:248] "coredns" deployment in "kube-system" namespace and "functional-922000" context rescaled to 1 replicas
	I0610 07:08:09.754685    1707 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:08:09.758891    1707 out.go:177] * Verifying Kubernetes components...
	I0610 07:08:09.755065    1707 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 07:08:09.767053    1707 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 07:08:09.767062    1707 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/id_rsa Username:docker}
	I0610 07:08:09.767118    1707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 07:08:09.788423    1707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 07:08:09.806167    1707 node_ready.go:35] waiting up to 6m0s for node "functional-922000" to be "Ready" ...
	I0610 07:08:09.806301    1707 start.go:889] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0610 07:08:09.807559    1707 node_ready.go:49] node "functional-922000" has status "Ready":"True"
	I0610 07:08:09.807563    1707 node_ready.go:38] duration metric: took 1.388ms waiting for node "functional-922000" to be "Ready" ...
	I0610 07:08:09.807565    1707 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 07:08:09.811496    1707 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5d78c9869d-t6psw" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:09.814083    1707 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 07:08:09.814422    1707 pod_ready.go:92] pod "coredns-5d78c9869d-t6psw" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:09.814425    1707 pod_ready.go:81] duration metric: took 2.921292ms waiting for pod "coredns-5d78c9869d-t6psw" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:09.814432    1707 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:10.167751    1707 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0610 07:08:10.171794    1707 addons.go:499] enable addons completed in 425.728208ms: enabled=[storage-provisioner default-storageclass]
	I0610 07:08:10.212497    1707 pod_ready.go:92] pod "etcd-functional-922000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:10.212500    1707 pod_ready.go:81] duration metric: took 398.079583ms waiting for pod "etcd-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:10.212504    1707 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:10.615166    1707 pod_ready.go:92] pod "kube-apiserver-functional-922000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:10.615177    1707 pod_ready.go:81] duration metric: took 402.682833ms waiting for pod "kube-apiserver-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:10.615188    1707 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:11.013082    1707 pod_ready.go:92] pod "kube-controller-manager-functional-922000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:11.013092    1707 pod_ready.go:81] duration metric: took 397.910417ms waiting for pod "kube-controller-manager-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:11.013102    1707 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-85t2n" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:11.414060    1707 pod_ready.go:92] pod "kube-proxy-85t2n" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:11.414068    1707 pod_ready.go:81] duration metric: took 400.973625ms waiting for pod "kube-proxy-85t2n" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:11.414076    1707 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:11.818854    1707 pod_ready.go:92] pod "kube-scheduler-functional-922000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:08:11.818880    1707 pod_ready.go:81] duration metric: took 404.805333ms waiting for pod "kube-scheduler-functional-922000" in "kube-system" namespace to be "Ready" ...
	I0610 07:08:11.818901    1707 pod_ready.go:38] duration metric: took 2.011395625s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 07:08:11.818960    1707 api_server.go:52] waiting for apiserver process to appear ...
	I0610 07:08:11.819298    1707 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 07:08:11.836064    1707 api_server.go:72] duration metric: took 2.081433917s to wait for apiserver process to appear ...
	I0610 07:08:11.836072    1707 api_server.go:88] waiting for apiserver healthz status ...
	I0610 07:08:11.836085    1707 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0610 07:08:11.844542    1707 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0610 07:08:11.846554    1707 api_server.go:141] control plane version: v1.27.2
	I0610 07:08:11.846565    1707 api_server.go:131] duration metric: took 10.487834ms to wait for apiserver health ...
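The healthz probe logged above can be reproduced by hand; a minimal sketch, assuming the apiserver is still reachable at 192.168.105.4:8441 and that -k is acceptable for the cluster's self-signed certificate:

    # Hit the same endpoint minikube polled above; a healthy apiserver answers "ok".
    curl -k https://192.168.105.4:8441/healthz

    # Authenticated equivalent through kubectl, using the kubeconfig minikube wrote:
    kubectl --kubeconfig /Users/jenkins/minikube-integration/15074-894/kubeconfig get --raw /healthz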
	I0610 07:08:11.846570    1707 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 07:08:12.022799    1707 system_pods.go:59] 7 kube-system pods found
	I0610 07:08:12.022826    1707 system_pods.go:61] "coredns-5d78c9869d-t6psw" [b1eaff8e-23c4-41d0-9ed9-cfd279442f52] Running
	I0610 07:08:12.022832    1707 system_pods.go:61] "etcd-functional-922000" [4cda82e2-d9ae-4bde-9f67-9223cc55806b] Running
	I0610 07:08:12.022838    1707 system_pods.go:61] "kube-apiserver-functional-922000" [32edc365-f40f-49ca-9a1d-62f4a69cfe54] Running
	I0610 07:08:12.022845    1707 system_pods.go:61] "kube-controller-manager-functional-922000" [4a883002-ac7f-4e1a-9c9e-4de5a8543b6d] Running
	I0610 07:08:12.022851    1707 system_pods.go:61] "kube-proxy-85t2n" [e1ed72d1-7698-4522-808a-043e74348302] Running
	I0610 07:08:12.022857    1707 system_pods.go:61] "kube-scheduler-functional-922000" [fcb04c5a-d76d-424d-9055-85d14e00dca4] Running
	I0610 07:08:12.022863    1707 system_pods.go:61] "storage-provisioner" [2bceeb6a-f269-4715-8ccd-234fa86f7c70] Running
	I0610 07:08:12.022875    1707 system_pods.go:74] duration metric: took 176.304125ms to wait for pod list to return data ...
	I0610 07:08:12.022886    1707 default_sa.go:34] waiting for default service account to be created ...
	I0610 07:08:12.218808    1707 default_sa.go:45] found service account: "default"
	I0610 07:08:12.218828    1707 default_sa.go:55] duration metric: took 195.939041ms for default service account to be created ...
	I0610 07:08:12.218880    1707 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 07:08:12.423766    1707 system_pods.go:86] 7 kube-system pods found
	I0610 07:08:12.423790    1707 system_pods.go:89] "coredns-5d78c9869d-t6psw" [b1eaff8e-23c4-41d0-9ed9-cfd279442f52] Running
	I0610 07:08:12.423798    1707 system_pods.go:89] "etcd-functional-922000" [4cda82e2-d9ae-4bde-9f67-9223cc55806b] Running
	I0610 07:08:12.423804    1707 system_pods.go:89] "kube-apiserver-functional-922000" [32edc365-f40f-49ca-9a1d-62f4a69cfe54] Running
	I0610 07:08:12.423810    1707 system_pods.go:89] "kube-controller-manager-functional-922000" [4a883002-ac7f-4e1a-9c9e-4de5a8543b6d] Running
	I0610 07:08:12.423816    1707 system_pods.go:89] "kube-proxy-85t2n" [e1ed72d1-7698-4522-808a-043e74348302] Running
	I0610 07:08:12.423822    1707 system_pods.go:89] "kube-scheduler-functional-922000" [fcb04c5a-d76d-424d-9055-85d14e00dca4] Running
	I0610 07:08:12.423827    1707 system_pods.go:89] "storage-provisioner" [2bceeb6a-f269-4715-8ccd-234fa86f7c70] Running
	I0610 07:08:12.423844    1707 system_pods.go:126] duration metric: took 204.963292ms to wait for k8s-apps to be running ...
	I0610 07:08:12.423854    1707 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 07:08:12.424067    1707 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 07:08:12.442089    1707 system_svc.go:56] duration metric: took 18.220542ms WaitForService to wait for kubelet.
	I0610 07:08:12.442110    1707 kubeadm.go:581] duration metric: took 2.687502208s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 07:08:12.442134    1707 node_conditions.go:102] verifying NodePressure condition ...
	I0610 07:08:12.613684    1707 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0610 07:08:12.613693    1707 node_conditions.go:123] node cpu capacity is 2
	I0610 07:08:12.613701    1707 node_conditions.go:105] duration metric: took 171.569375ms to run NodePressure ...
	I0610 07:08:12.613709    1707 start.go:228] waiting for startup goroutines ...
	I0610 07:08:12.613714    1707 start.go:233] waiting for cluster config update ...
	I0610 07:08:12.613722    1707 start.go:242] writing updated cluster config ...
	I0610 07:08:12.614223    1707 ssh_runner.go:195] Run: rm -f paused
	I0610 07:08:12.654907    1707 start.go:573] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0610 07:08:12.659237    1707 out.go:177] 
	W0610 07:08:12.663256    1707 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0610 07:08:12.666190    1707 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0610 07:08:12.673181    1707 out.go:177] * Done! kubectl is now configured to use "functional-922000" cluster and "default" namespace by default
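The skew warning above is worth acting on: kubectl supports clusters only within one minor version of the client, and 1.25.9 against 1.27.2 is two minors apart. A minimal check-and-workaround sketch, assuming kubectl and minikube are on PATH, following the hint minikube itself prints:

    # Compare client and server versions (client 1.25.9 vs. server 1.27.2 here).
    kubectl version --output=yaml

    # Use minikube's bundled kubectl, which matches the cluster's v1.27.2:
    minikube kubectl -- get pods -A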
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 14:06:05 UTC, ends at Sat 2023-06-10 14:09:07 UTC. --
	Jun 10 14:08:47 functional-922000 dockerd[6669]: time="2023-06-10T14:08:47.083444119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 14:08:47 functional-922000 dockerd[6669]: time="2023-06-10T14:08:47.083471036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:08:47 functional-922000 cri-dockerd[6927]: time="2023-06-10T14:08:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/30fc1d5b1924cd1016c6930769f1b722335483bedab844ef360788e080156de2/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jun 10 14:08:49 functional-922000 cri-dockerd[6927]: time="2023-06-10T14:08:49Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	Jun 10 14:08:49 functional-922000 dockerd[6669]: time="2023-06-10T14:08:49.801568436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 14:08:49 functional-922000 dockerd[6669]: time="2023-06-10T14:08:49.801597644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:08:49 functional-922000 dockerd[6669]: time="2023-06-10T14:08:49.801608019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 14:08:49 functional-922000 dockerd[6669]: time="2023-06-10T14:08:49.801640185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:08:49 functional-922000 dockerd[6663]: time="2023-06-10T14:08:49.853187472Z" level=info msg="ignoring event" container=d948b228cad44b29db4c3f34446304c66efa58d5a9ec21e523314d6682606312 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 14:08:49 functional-922000 dockerd[6669]: time="2023-06-10T14:08:49.853253263Z" level=info msg="shim disconnected" id=d948b228cad44b29db4c3f34446304c66efa58d5a9ec21e523314d6682606312 namespace=moby
	Jun 10 14:08:49 functional-922000 dockerd[6669]: time="2023-06-10T14:08:49.853279804Z" level=warning msg="cleaning up after shim disconnected" id=d948b228cad44b29db4c3f34446304c66efa58d5a9ec21e523314d6682606312 namespace=moby
	Jun 10 14:08:49 functional-922000 dockerd[6669]: time="2023-06-10T14:08:49.853283679Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 14:08:49 functional-922000 dockerd[6669]: time="2023-06-10T14:08:49.867235217Z" level=warning msg="cleanup warnings time=\"2023-06-10T14:08:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jun 10 14:08:51 functional-922000 dockerd[6663]: time="2023-06-10T14:08:51.770217843Z" level=info msg="ignoring event" container=30fc1d5b1924cd1016c6930769f1b722335483bedab844ef360788e080156de2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 14:08:51 functional-922000 dockerd[6669]: time="2023-06-10T14:08:51.770415424Z" level=info msg="shim disconnected" id=30fc1d5b1924cd1016c6930769f1b722335483bedab844ef360788e080156de2 namespace=moby
	Jun 10 14:08:51 functional-922000 dockerd[6669]: time="2023-06-10T14:08:51.770517756Z" level=warning msg="cleaning up after shim disconnected" id=30fc1d5b1924cd1016c6930769f1b722335483bedab844ef360788e080156de2 namespace=moby
	Jun 10 14:08:51 functional-922000 dockerd[6669]: time="2023-06-10T14:08:51.770523797Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 14:08:53 functional-922000 dockerd[6669]: time="2023-06-10T14:08:53.826121521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 14:08:53 functional-922000 dockerd[6669]: time="2023-06-10T14:08:53.826153104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:08:53 functional-922000 dockerd[6669]: time="2023-06-10T14:08:53.826161812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 14:08:53 functional-922000 dockerd[6669]: time="2023-06-10T14:08:53.826167645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:08:53 functional-922000 dockerd[6669]: time="2023-06-10T14:08:53.866373876Z" level=info msg="shim disconnected" id=09b5d791f3e61fc2287f795d516de7adcf6a6219e7dea72ef376f9165f616330 namespace=moby
	Jun 10 14:08:53 functional-922000 dockerd[6669]: time="2023-06-10T14:08:53.866403375Z" level=warning msg="cleaning up after shim disconnected" id=09b5d791f3e61fc2287f795d516de7adcf6a6219e7dea72ef376f9165f616330 namespace=moby
	Jun 10 14:08:53 functional-922000 dockerd[6669]: time="2023-06-10T14:08:53.866407709Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 14:08:53 functional-922000 dockerd[6663]: time="2023-06-10T14:08:53.866536957Z" level=info msg="ignoring event" container=09b5d791f3e61fc2287f795d516de7adcf6a6219e7dea72ef376f9165f616330 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	09b5d791f3e61       72565bf5bbedf                                                                                         14 seconds ago       Exited              echoserver-arm            2                   745f80bccea3d
	d948b228cad44       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 seconds ago       Exited              mount-munger              0                   30fc1d5b1924c
	981ece83357f9       72565bf5bbedf                                                                                         24 seconds ago       Exited              echoserver-arm            2                   daa62ebf09662
	e7c6123c452f4       nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305                         35 seconds ago       Running             myfrontend                0                   fe44eb8f31986
	6db943a1785cb       nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90                         50 seconds ago       Running             nginx                     0                   2d5a987eb63ab
	367cd1fbac131       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       1                   792e463185e64
	4652118bce881       97e04611ad434                                                                                         About a minute ago   Running             coredns                   2                   3a3907b1cba23
	f1b9bf9b5ad6c       29921a0845422                                                                                         About a minute ago   Running             kube-proxy                2                   f86ac266646a5
	6494094dba459       72c9df6be7f1b                                                                                         About a minute ago   Running             kube-apiserver            0                   110bc1c05e5cc
	7a510d1cbb126       24bc64e911039                                                                                         About a minute ago   Running             etcd                      2                   cd83dc191efb2
	9edd9fd5235dd       305d7ed1dae28                                                                                         About a minute ago   Running             kube-scheduler            2                   f4e74ceebecfe
	28afe2745e6a1       2ee705380c3c5                                                                                         About a minute ago   Running             kube-controller-manager   2                   0e872bfab2a36
	e8b4e63066a8b       ba04bb24b9575                                                                                         About a minute ago   Exited              storage-provisioner       0                   74ce844d9a434
	fad9fe01cae01       97e04611ad434                                                                                         2 minutes ago        Exited              coredns                   1                   6f5e013f71e7b
	434eda920b8b2       24bc64e911039                                                                                         2 minutes ago        Exited              etcd                      1                   d7485d9710085
	5b4a1dd90aab8       305d7ed1dae28                                                                                         2 minutes ago        Exited              kube-scheduler            1                   e5b53bc528056
	1eee634b3e32c       29921a0845422                                                                                         2 minutes ago        Exited              kube-proxy                1                   94caa5af9f567
	b6506990de022       2ee705380c3c5                                                                                         2 minutes ago        Exited              kube-controller-manager   1                   1851aae1c841f
	
	* 
	* ==> coredns [4652118bce88] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39620 - 1210 "HINFO IN 4225525387558143024.7918918146938912235. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004542152s
	[INFO] 10.244.0.1:48968 - 12785 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000094958s
	[INFO] 10.244.0.1:41685 - 32557 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000100708s
	[INFO] 10.244.0.1:25387 - 38476 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000032375s
	[INFO] 10.244.0.1:39906 - 23676 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.00099737s
	[INFO] 10.244.0.1:45412 - 14862 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000069749s
	[INFO] 10.244.0.1:20063 - 133 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000106332s
	
	* 
	* ==> coredns [fad9fe01cae0] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59588 - 63894 "HINFO IN 6217644684582007964.1335869511756623057. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.004224007s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               functional-922000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-922000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3273891fc7fc0f39c65075197baa2d52fc489f6f
	                    minikube.k8s.io/name=functional-922000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T07_06_23_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 14:06:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-922000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 14:09:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 14:08:54 +0000   Sat, 10 Jun 2023 14:06:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 14:08:54 +0000   Sat, 10 Jun 2023 14:06:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 14:08:54 +0000   Sat, 10 Jun 2023 14:06:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 14:08:54 +0000   Sat, 10 Jun 2023 14:06:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-922000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	System Info:
	  Machine ID:                 bbaa5f0181134df786eef812b971802e
	  System UUID:                bbaa5f0181134df786eef812b971802e
	  Boot ID:                    069b4c2f-b02b-4a8f-a761-cd30d5f94b57
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-7b684b55f9-tdk7f                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  default                     hello-node-connect-58d66798bb-gr9nx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 coredns-5d78c9869d-t6psw                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m30s
	  kube-system                 etcd-functional-922000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m44s
	  kube-system                 kube-apiserver-functional-922000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-controller-manager-functional-922000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 kube-proxy-85t2n                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-functional-922000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m30s                  kube-proxy       
	  Normal  Starting                 74s                    kube-proxy       
	  Normal  Starting                 119s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m49s (x8 over 2m49s)  kubelet          Node functional-922000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m49s (x8 over 2m49s)  kubelet          Node functional-922000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m49s (x7 over 2m49s)  kubelet          Node functional-922000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m44s                  kubelet          Node functional-922000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m44s                  kubelet          Node functional-922000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s                  kubelet          Node functional-922000 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m44s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m42s                  kubelet          Node functional-922000 status is now: NodeReady
	  Normal  RegisteredNode           2m31s                  node-controller  Node functional-922000 event: Registered Node functional-922000 in Controller
	  Normal  RegisteredNode           107s                   node-controller  Node functional-922000 event: Registered Node functional-922000 in Controller
	  Normal  NodeHasNoDiskPressure    78s (x8 over 78s)      kubelet          Node functional-922000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  78s (x8 over 78s)      kubelet          Node functional-922000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 78s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     78s (x7 over 78s)      kubelet          Node functional-922000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  78s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                    node-controller  Node functional-922000 event: Registered Node functional-922000 in Controller
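The node report above is standard describe output; assuming the functional-922000 profile is still running, it can be regenerated on demand, which helps when tracking the repeated kubelet restarts visible in the events table:

    # Re-dump the node report captured above.
    kubectl describe node functional-922000

    # Or via minikube's bundled, version-matched client:
    minikube kubectl -- describe node functional-922000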
	
	* 
	* ==> dmesg <==
	* [  +2.453168] systemd-fstab-generator[3774]: Ignoring "noauto" for root device
	[  +0.123951] systemd-fstab-generator[3819]: Ignoring "noauto" for root device
	[  +0.081697] systemd-fstab-generator[3830]: Ignoring "noauto" for root device
	[  +0.079465] systemd-fstab-generator[3843]: Ignoring "noauto" for root device
	[Jun10 14:07] systemd-fstab-generator[4348]: Ignoring "noauto" for root device
	[  +0.065454] systemd-fstab-generator[4359]: Ignoring "noauto" for root device
	[  +0.066900] systemd-fstab-generator[4370]: Ignoring "noauto" for root device
	[  +0.060626] systemd-fstab-generator[4381]: Ignoring "noauto" for root device
	[  +0.091952] systemd-fstab-generator[4451]: Ignoring "noauto" for root device
	[  +6.092511] kauditd_printk_skb: 34 callbacks suppressed
	[ +28.500731] systemd-fstab-generator[6200]: Ignoring "noauto" for root device
	[  +0.139083] systemd-fstab-generator[6234]: Ignoring "noauto" for root device
	[  +0.081503] systemd-fstab-generator[6245]: Ignoring "noauto" for root device
	[  +0.100001] systemd-fstab-generator[6258]: Ignoring "noauto" for root device
	[ +11.472129] systemd-fstab-generator[6813]: Ignoring "noauto" for root device
	[  +0.068493] systemd-fstab-generator[6824]: Ignoring "noauto" for root device
	[  +0.066724] systemd-fstab-generator[6835]: Ignoring "noauto" for root device
	[  +0.067615] systemd-fstab-generator[6846]: Ignoring "noauto" for root device
	[  +0.078608] systemd-fstab-generator[6920]: Ignoring "noauto" for root device
	[  +0.851753] systemd-fstab-generator[7170]: Ignoring "noauto" for root device
	[  +3.856024] kauditd_printk_skb: 34 callbacks suppressed
	[Jun10 14:08] kauditd_printk_skb: 1 callbacks suppressed
	[  +7.133818] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[ +11.687313] kauditd_printk_skb: 1 callbacks suppressed
	[  +8.620993] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [434eda920b8b] <==
	* {"level":"info","ts":"2023-06-10T14:07:06.325Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-10T14:07:06.325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-06-10T14:07:06.325Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-06-10T14:07:06.325Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:07:06.325Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:07:07.399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2023-06-10T14:07:07.399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-06-10T14:07:07.399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2023-06-10T14:07:07.399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2023-06-10T14:07:07.399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-06-10T14:07:07.399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2023-06-10T14:07:07.400Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-06-10T14:07:07.402Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-922000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T14:07:07.402Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T14:07:07.402Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T14:07:07.403Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T14:07:07.403Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-06-10T14:07:07.404Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T14:07:07.404Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T14:07:37.140Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-06-10T14:07:37.140Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"functional-922000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"info","ts":"2023-06-10T14:07:37.150Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2023-06-10T14:07:37.152Z","caller":"embed/etcd.go:568","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-10T14:07:37.153Z","caller":"embed/etcd.go:573","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-10T14:07:37.153Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"functional-922000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	* 
	* ==> etcd [7a510d1cbb12] <==
	* {"level":"info","ts":"2023-06-10T14:07:50.789Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-10T14:07:50.789Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-06-10T14:07:50.789Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T14:07:50.789Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T14:07:50.789Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-06-10T14:07:50.789Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-10T14:07:50.789Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2023-06-10T14:07:50.790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2023-06-10T14:07:50.790Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2023-06-10T14:07:50.790Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:07:50.790Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:07:51.974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2023-06-10T14:07:51.974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-06-10T14:07:51.974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2023-06-10T14:07:51.974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2023-06-10T14:07:51.974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-06-10T14:07:51.974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2023-06-10T14:07:51.974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2023-06-10T14:07:51.977Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T14:07:51.977Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-922000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T14:07:51.978Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T14:07:51.980Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2023-06-10T14:07:51.981Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T14:07:51.981Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T14:07:51.981Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  14:09:07 up 3 min,  0 users,  load average: 0.87, 0.50, 0.20
	Linux functional-922000 5.10.57 #1 SMP PREEMPT Wed Jun 7 01:52:34 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [6494094dba45] <==
	* I0610 14:07:52.642852       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 14:07:52.677768       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E0610 14:07:52.678108       1 controller.go:155] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0610 14:07:52.708128       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 14:07:52.708253       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0610 14:07:52.708268       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0610 14:07:52.708284       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 14:07:52.708819       1 cache.go:39] Caches are synced for autoregister controller
	I0610 14:07:52.709022       1 shared_informer.go:318] Caches are synced for configmaps
	I0610 14:07:52.709165       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0610 14:07:52.709378       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0610 14:07:52.710463       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0610 14:07:53.480508       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 14:07:53.610744       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 14:07:54.261957       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 14:07:54.265173       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 14:07:54.276566       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0610 14:07:54.283986       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 14:07:54.286242       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 14:08:05.667086       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 14:08:05.681923       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 14:08:14.728810       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs=map[IPv4:10.111.190.35]
	I0610 14:08:25.107091       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0610 14:08:25.149070       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs=map[IPv4:10.103.105.110]
	I0610 14:08:38.568528       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs=map[IPv4:10.104.220.189]
	
	* 
	* ==> kube-controller-manager [28afe2745e6a] <==
	* I0610 14:08:05.669535       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0610 14:08:05.670575       1 shared_informer.go:318] Caches are synced for ephemeral
	I0610 14:08:05.672354       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0610 14:08:05.672374       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0610 14:08:05.677540       1 shared_informer.go:318] Caches are synced for endpoint
	I0610 14:08:05.679774       1 shared_informer.go:318] Caches are synced for GC
	I0610 14:08:05.680887       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0610 14:08:05.683111       1 shared_informer.go:318] Caches are synced for persistent volume
	I0610 14:08:05.684721       1 shared_informer.go:318] Caches are synced for attach detach
	I0610 14:08:05.695766       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0610 14:08:05.711607       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0610 14:08:05.804136       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 14:08:05.805294       1 shared_informer.go:318] Caches are synced for deployment
	I0610 14:08:05.810800       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0610 14:08:05.810819       1 shared_informer.go:318] Caches are synced for disruption
	I0610 14:08:05.873172       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 14:08:06.188300       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 14:08:06.267150       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 14:08:06.267204       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0610 14:08:19.613722       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0610 14:08:19.614017       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="waiting for a volume to be created, either by external provisioner \"k8s.io/minikube-hostpath\" or manually created by system administrator"
	I0610 14:08:25.108517       1 event.go:307] "Event occurred" object="default/hello-node-connect" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-connect-58d66798bb to 1"
	I0610 14:08:25.116774       1 event.go:307] "Event occurred" object="default/hello-node-connect-58d66798bb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-connect-58d66798bb-gr9nx"
	I0610 14:08:38.518028       1 event.go:307] "Event occurred" object="default/hello-node" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-node-7b684b55f9 to 1"
	I0610 14:08:38.521344       1 event.go:307] "Event occurred" object="default/hello-node-7b684b55f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-node-7b684b55f9-tdk7f"
	
	* 
	* ==> kube-controller-manager [b6506990de02] <==
	* I0610 14:07:21.026317       1 shared_informer.go:318] Caches are synced for service account
	I0610 14:07:21.026355       1 shared_informer.go:318] Caches are synced for TTL
	I0610 14:07:21.026308       1 shared_informer.go:318] Caches are synced for expand
	I0610 14:07:21.048043       1 shared_informer.go:318] Caches are synced for PV protection
	I0610 14:07:21.048077       1 shared_informer.go:318] Caches are synced for node
	I0610 14:07:21.048118       1 range_allocator.go:174] "Sending events to api server"
	I0610 14:07:21.048153       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0610 14:07:21.048156       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0610 14:07:21.048159       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0610 14:07:21.055437       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0610 14:07:21.077630       1 shared_informer.go:318] Caches are synced for HPA
	I0610 14:07:21.077671       1 shared_informer.go:318] Caches are synced for attach detach
	I0610 14:07:21.077706       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0610 14:07:21.077765       1 shared_informer.go:318] Caches are synced for stateful set
	I0610 14:07:21.077819       1 shared_informer.go:318] Caches are synced for bootstrap_signer
	I0610 14:07:21.078100       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I0610 14:07:21.128010       1 shared_informer.go:318] Caches are synced for disruption
	I0610 14:07:21.159442       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 14:07:21.177931       1 shared_informer.go:318] Caches are synced for persistent volume
	I0610 14:07:21.187140       1 shared_informer.go:318] Caches are synced for resource quota
	I0610 14:07:21.192392       1 shared_informer.go:318] Caches are synced for deployment
	I0610 14:07:21.197594       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0610 14:07:21.588869       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 14:07:21.601728       1 shared_informer.go:318] Caches are synced for garbage collector
	I0610 14:07:21.601750       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [1eee634b3e32] <==
	* I0610 14:07:08.086121       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0610 14:07:08.086178       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0610 14:07:08.086249       1 server_others.go:551] "Using iptables proxy"
	I0610 14:07:08.099040       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 14:07:08.099050       1 server_others.go:190] "Using iptables Proxier"
	I0610 14:07:08.099065       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 14:07:08.099230       1 server.go:657] "Version info" version="v1.27.2"
	I0610 14:07:08.099234       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 14:07:08.099728       1 config.go:315] "Starting node config controller"
	I0610 14:07:08.099737       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 14:07:08.099851       1 config.go:188] "Starting service config controller"
	I0610 14:07:08.099875       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 14:07:08.099896       1 config.go:97] "Starting endpoint slice config controller"
	I0610 14:07:08.099913       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 14:07:08.200501       1 shared_informer.go:318] Caches are synced for node config
	I0610 14:07:08.200640       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0610 14:07:08.206564       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [f1b9bf9b5ad6] <==
	* I0610 14:07:53.394372       1 node.go:141] Successfully retrieved node IP: 192.168.105.4
	I0610 14:07:53.394405       1 server_others.go:110] "Detected node IP" address="192.168.105.4"
	I0610 14:07:53.394490       1 server_others.go:551] "Using iptables proxy"
	I0610 14:07:53.405065       1 server_others.go:176] "kube-proxy running in single-stack mode: secondary ipFamily is not supported" ipFamily=IPv6
	I0610 14:07:53.405077       1 server_others.go:190] "Using iptables Proxier"
	I0610 14:07:53.405138       1 proxier.go:253] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0610 14:07:53.405325       1 server.go:657] "Version info" version="v1.27.2"
	I0610 14:07:53.405356       1 server.go:659] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 14:07:53.405659       1 config.go:188] "Starting service config controller"
	I0610 14:07:53.405670       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0610 14:07:53.405714       1 config.go:97] "Starting endpoint slice config controller"
	I0610 14:07:53.405720       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0610 14:07:53.405947       1 config.go:315] "Starting node config controller"
	I0610 14:07:53.405969       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0610 14:07:53.506536       1 shared_informer.go:318] Caches are synced for service config
	I0610 14:07:53.506535       1 shared_informer.go:318] Caches are synced for node config
	I0610 14:07:53.506544       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [5b4a1dd90aab] <==
	* I0610 14:07:06.602981       1 serving.go:348] Generated self-signed cert in-memory
	W0610 14:07:08.047678       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 14:07:08.047740       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 14:07:08.047754       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 14:07:08.047761       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 14:07:08.085767       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0610 14:07:08.085783       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 14:07:08.086616       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 14:07:08.086668       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 14:07:08.088385       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0610 14:07:08.088434       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 14:07:08.186991       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 14:07:37.160164       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0610 14:07:37.160326       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0610 14:07:37.160362       1 scheduling_queue.go:1135] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	E0610 14:07:37.160374       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [9edd9fd5235d] <==
	* I0610 14:07:51.274732       1 serving.go:348] Generated self-signed cert in-memory
	W0610 14:07:52.651013       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0610 14:07:52.651157       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 14:07:52.651174       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0610 14:07:52.651182       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0610 14:07:52.679373       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.27.2"
	I0610 14:07:52.679389       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 14:07:52.680422       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0610 14:07:52.681987       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0610 14:07:52.681997       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0610 14:07:52.682005       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 14:07:52.782181       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 14:06:05 UTC, ends at Sat 2023-06-10 14:09:08 UTC. --
	Jun 10 14:08:43 functional-922000 kubelet[7176]: I0610 14:08:43.786945    7176 scope.go:115] "RemoveContainer" containerID="c852e8949bad4397a13ea6a947d8b9f7742b196b74a0d09599b47d0d7c56c5a7"
	Jun 10 14:08:44 functional-922000 kubelet[7176]: I0610 14:08:44.598044    7176 scope.go:115] "RemoveContainer" containerID="c852e8949bad4397a13ea6a947d8b9f7742b196b74a0d09599b47d0d7c56c5a7"
	Jun 10 14:08:44 functional-922000 kubelet[7176]: I0610 14:08:44.598387    7176 scope.go:115] "RemoveContainer" containerID="981ece83357f9ab2a457ed44e59a65c4f413e82ae86f1e0f79fc5fec22fe86f9"
	Jun 10 14:08:44 functional-922000 kubelet[7176]: E0610 14:08:44.598606    7176 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-gr9nx_default(7f381893-b5d6-4dde-b2f2-7a091639bf9f)\"" pod="default/hello-node-connect-58d66798bb-gr9nx" podUID=7f381893-b5d6-4dde-b2f2-7a091639bf9f
	Jun 10 14:08:46 functional-922000 kubelet[7176]: I0610 14:08:46.717608    7176 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 14:08:46 functional-922000 kubelet[7176]: I0610 14:08:46.848411    7176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3312e9f9-36d0-42a8-9de2-5b88bb3da415-test-volume\") pod \"busybox-mount\" (UID: \"3312e9f9-36d0-42a8-9de2-5b88bb3da415\") " pod="default/busybox-mount"
	Jun 10 14:08:46 functional-922000 kubelet[7176]: I0610 14:08:46.848488    7176 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56tpn\" (UniqueName: \"kubernetes.io/projected/3312e9f9-36d0-42a8-9de2-5b88bb3da415-kube-api-access-56tpn\") pod \"busybox-mount\" (UID: \"3312e9f9-36d0-42a8-9de2-5b88bb3da415\") " pod="default/busybox-mount"
	Jun 10 14:08:49 functional-922000 kubelet[7176]: E0610 14:08:49.808728    7176 iptables.go:575] "Could not set up iptables canary" err=<
	Jun 10 14:08:49 functional-922000 kubelet[7176]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jun 10 14:08:49 functional-922000 kubelet[7176]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jun 10 14:08:49 functional-922000 kubelet[7176]:  > table=nat chain=KUBE-KUBELET-CANARY
	Jun 10 14:08:49 functional-922000 kubelet[7176]: I0610 14:08:49.866569    7176 scope.go:115] "RemoveContainer" containerID="e0a6e45c334ff7633ab10c77aec76585a44e064925cb02712cd62c44ff245059"
	Jun 10 14:08:51 functional-922000 kubelet[7176]: I0610 14:08:51.916308    7176 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56tpn\" (UniqueName: \"kubernetes.io/projected/3312e9f9-36d0-42a8-9de2-5b88bb3da415-kube-api-access-56tpn\") pod \"3312e9f9-36d0-42a8-9de2-5b88bb3da415\" (UID: \"3312e9f9-36d0-42a8-9de2-5b88bb3da415\") "
	Jun 10 14:08:51 functional-922000 kubelet[7176]: I0610 14:08:51.916342    7176 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3312e9f9-36d0-42a8-9de2-5b88bb3da415-test-volume\") pod \"3312e9f9-36d0-42a8-9de2-5b88bb3da415\" (UID: \"3312e9f9-36d0-42a8-9de2-5b88bb3da415\") "
	Jun 10 14:08:51 functional-922000 kubelet[7176]: I0610 14:08:51.916391    7176 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3312e9f9-36d0-42a8-9de2-5b88bb3da415-test-volume" (OuterVolumeSpecName: "test-volume") pod "3312e9f9-36d0-42a8-9de2-5b88bb3da415" (UID: "3312e9f9-36d0-42a8-9de2-5b88bb3da415"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Jun 10 14:08:51 functional-922000 kubelet[7176]: I0610 14:08:51.917155    7176 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3312e9f9-36d0-42a8-9de2-5b88bb3da415-kube-api-access-56tpn" (OuterVolumeSpecName: "kube-api-access-56tpn") pod "3312e9f9-36d0-42a8-9de2-5b88bb3da415" (UID: "3312e9f9-36d0-42a8-9de2-5b88bb3da415"). InnerVolumeSpecName "kube-api-access-56tpn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jun 10 14:08:52 functional-922000 kubelet[7176]: I0610 14:08:52.017058    7176 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-56tpn\" (UniqueName: \"kubernetes.io/projected/3312e9f9-36d0-42a8-9de2-5b88bb3da415-kube-api-access-56tpn\") on node \"functional-922000\" DevicePath \"\""
	Jun 10 14:08:52 functional-922000 kubelet[7176]: I0610 14:08:52.017096    7176 reconciler_common.go:300] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/3312e9f9-36d0-42a8-9de2-5b88bb3da415-test-volume\") on node \"functional-922000\" DevicePath \"\""
	Jun 10 14:08:52 functional-922000 kubelet[7176]: I0610 14:08:52.711350    7176 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30fc1d5b1924cd1016c6930769f1b722335483bedab844ef360788e080156de2"
	Jun 10 14:08:53 functional-922000 kubelet[7176]: I0610 14:08:53.785723    7176 scope.go:115] "RemoveContainer" containerID="2960e601aca58a651e0737537220701cdfa7f2f5aff9948f5e69febd07fb2eb0"
	Jun 10 14:08:54 functional-922000 kubelet[7176]: I0610 14:08:54.729598    7176 scope.go:115] "RemoveContainer" containerID="2960e601aca58a651e0737537220701cdfa7f2f5aff9948f5e69febd07fb2eb0"
	Jun 10 14:08:54 functional-922000 kubelet[7176]: I0610 14:08:54.730219    7176 scope.go:115] "RemoveContainer" containerID="09b5d791f3e61fc2287f795d516de7adcf6a6219e7dea72ef376f9165f616330"
	Jun 10 14:08:54 functional-922000 kubelet[7176]: E0610 14:08:54.730577    7176 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-7b684b55f9-tdk7f_default(684cdb84-734b-4dca-87c1-6c5a68e88b2d)\"" pod="default/hello-node-7b684b55f9-tdk7f" podUID=684cdb84-734b-4dca-87c1-6c5a68e88b2d
	Jun 10 14:08:59 functional-922000 kubelet[7176]: I0610 14:08:59.787635    7176 scope.go:115] "RemoveContainer" containerID="981ece83357f9ab2a457ed44e59a65c4f413e82ae86f1e0f79fc5fec22fe86f9"
	Jun 10 14:08:59 functional-922000 kubelet[7176]: E0610 14:08:59.788672    7176 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-58d66798bb-gr9nx_default(7f381893-b5d6-4dde-b2f2-7a091639bf9f)\"" pod="default/hello-node-connect-58d66798bb-gr9nx" podUID=7f381893-b5d6-4dde-b2f2-7a091639bf9f
	
	* 
	* ==> storage-provisioner [367cd1fbac13] <==
	* I0610 14:07:53.378676       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 14:07:53.396614       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 14:07:53.396631       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 14:08:10.801643       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 14:08:10.802294       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-922000_2dc622ba-0868-496f-8d7b-9af8cd7845cc!
	I0610 14:08:10.803620       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1b0cad1-a684-460e-8989-391ecfee6921", APIVersion:"v1", ResourceVersion:"620", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-922000_2dc622ba-0868-496f-8d7b-9af8cd7845cc became leader
	I0610 14:08:10.903441       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-922000_2dc622ba-0868-496f-8d7b-9af8cd7845cc!
	I0610 14:08:19.614519       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0610 14:08:19.615212       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d96559c5-deca-4d44-8403-15ec1529d95a", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0610 14:08:19.614581       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    6c23bee1-3980-4a87-bf0a-6ea4fcc04a6d 391 0 2023-06-10 14:06:37 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2023-06-10 14:06:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-d96559c5-deca-4d44-8403-15ec1529d95a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  d96559c5-deca-4d44-8403-15ec1529d95a 646 0 2023-06-10 14:08:19 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2023-06-10 14:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2023-06-10 14:08:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0610 14:08:19.615382       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-d96559c5-deca-4d44-8403-15ec1529d95a" provisioned
	I0610 14:08:19.615428       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0610 14:08:19.615507       1 volume_store.go:212] Trying to save persistentvolume "pvc-d96559c5-deca-4d44-8403-15ec1529d95a"
	I0610 14:08:19.620299       1 volume_store.go:219] persistentvolume "pvc-d96559c5-deca-4d44-8403-15ec1529d95a" saved
	I0610 14:08:19.620769       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"d96559c5-deca-4d44-8403-15ec1529d95a", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-d96559c5-deca-4d44-8403-15ec1529d95a
	
	* 
	* ==> storage-provisioner [e8b4e63066a8] <==
	* I0610 14:07:10.321337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 14:07:10.326262       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 14:07:10.326316       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 14:07:10.328850       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 14:07:10.328921       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-922000_f33334fe-4ddd-4ed3-a423-d26828f6be8b!
	I0610 14:07:10.329264       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f1b0cad1-a684-460e-8989-391ecfee6921", APIVersion:"v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-922000_f33334fe-4ddd-4ed3-a423-d26828f6be8b became leader
	I0610 14:07:10.429611       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-922000_f33334fe-4ddd-4ed3-a423-d26828f6be8b!
	

-- /stdout --
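The kubelet section of the logs above repeatedly fails its iptables canary because the guest kernel exposes no ip6tables nat table; this is noise from the QEMU guest image rather than the cause of the test failure, but it can be confirmed from inside the VM. A minimal sketch, assuming the functional-922000 profile is still running and reachable over SSH:

	# Ask the VM directly whether the IPv6 nat table exists; on this image it
	# fails with the same "Table does not exist" error seen in the kubelet log.
	out/minikube-darwin-arm64 -p functional-922000 ssh -- sudo ip6tables -t nat -L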
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-922000 -n functional-922000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-922000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-922000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-922000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-922000/192.168.105.4
	Start Time:       Sat, 10 Jun 2023 07:08:46 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://d948b228cad44b29db4c3f34446304c66efa58d5a9ec21e523314d6682606312
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 10 Jun 2023 07:08:49 -0700
	      Finished:     Sat, 10 Jun 2023 07:08:49 -0700
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-56tpn (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-56tpn:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  22s   default-scheduler  Successfully assigned default/busybox-mount to functional-922000
	  Normal  Pulling    21s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     19s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.555165884s (2.555172133s including waiting)
	  Normal  Created    19s   kubelet            Created container mount-munger
	  Normal  Started    19s   kubelet            Started container mount-munger

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (43.24s)
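Two distinct signals appear in this post-mortem: busybox-mount is merely Succeeded (a completed one-shot pod, which the status.phase!=Running selector picks up harmlessly), while the actual failure is the echoserver-arm container crash-looping in both hello-node deployments, as the kubelet CrashLoopBackOff messages above show. A minimal way to inspect the crash loop by hand, sketched under the assumptions that the functional-922000 cluster is still up and that kubectl create deployment applied its usual app=<name> label:

	# List the crash-looping pods behind the hello-node-connect Deployment.
	kubectl --context functional-922000 get pods -l app=hello-node-connect
	# Dump output from the previously crashed container instance, if any.
	kubectl --context functional-922000 logs deploy/hello-node-connect --previous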

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image ls --format json --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-922000 image ls --format json --alsologtostderr:
[]
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-922000 image ls --format json --alsologtostderr:
I0610 07:09:23.905778    2068 out.go:296] Setting OutFile to fd 1 ...
I0610 07:09:23.905937    2068 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:09:23.905941    2068 out.go:309] Setting ErrFile to fd 2...
I0610 07:09:23.905944    2068 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:09:23.906024    2068 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
I0610 07:09:23.906458    2068 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 07:09:23.906516    2068 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
W0610 07:09:23.906776    2068 cache_images.go:695] error getting status for functional-922000: state: connect: dial unix /Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/monitor: connect: connection refused
functional_test.go:273: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)
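Here `image ls --format json` printed an empty array because the profile's QEMU monitor socket refused the connection; with the VM unreachable, no images could be enumerated, so the expected registry.k8s.io/pause entry was never listed. Against a healthy profile the same assertion can be approximated by hand; a sketch, assuming jq is installed and that the JSON entries carry a repoTags field, as recent minikube releases emit:

	# List images as JSON and search for the pause image the test expects.
	out/minikube-darwin-arm64 -p functional-922000 image ls --format json \
	  | jq -r '.[].repoTags[]?' | grep registry.k8s.io/pause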

TestImageBuild/serial/BuildWithBuildArg (1.05s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-734000
image_test.go:105: failed to pass build-args with args: "out/minikube-darwin-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-734000" : 
-- stdout --
	Sending build context to Docker daemon  2.048kB
	Step 1/5 : FROM gcr.io/google-containers/alpine-with-bash:1.0
	 ---> 822c13824dc2
	Step 2/5 : ARG ENV_A
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in b68c3936f753
	Removing intermediate container b68c3936f753
	 ---> 15614d048831
	Step 3/5 : ARG ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in 2831e1cfd05d
	Removing intermediate container 2831e1cfd05d
	 ---> 6eeed0ec8b12
	Step 4/5 : RUN echo "test-build-arg" $ENV_A $ENV_B
	 ---> [Warning] The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
	 ---> Running in c0bb4b0abf18
	exec /bin/sh: exec format error
	

-- /stdout --
** stderr ** 
	DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
	            Install the buildx component to build images with BuildKit:
	            https://docs.docker.com/go/buildx/
	
	The command '/bin/sh -c echo "test-build-arg" $ENV_A $ENV_B' returned a non-zero code: 1

** /stderr **
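The RUN step dies with "exec /bin/sh: exec format error" because the base image gcr.io/google-containers/alpine-with-bash:1.0 is published for linux/amd64 only, so its binaries cannot execute natively on this linux/arm64 host, exactly as the per-step platform warnings predict. Two plausible workarounds, sketched on the assumptions that the Dockerfile may be edited and that emulation may or may not be available inside the VM:

	# Option 1: request the amd64 platform explicitly; this only helps if the
	# Docker daemon inside the minikube VM has binfmt/qemu emulation set up.
	docker build --platform linux/amd64 -t aaa:latest ./testdata/image-build/test-arg
	# Option 2: switch the Dockerfile to a multi-arch base image instead of the
	# amd64-only gcr.io/google-containers/alpine-with-bash:1.0, for example:
	#   FROM alpine:3.18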
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-734000 -n image-734000
helpers_test.go:244: <<< TestImageBuild/serial/BuildWithBuildArg FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestImageBuild/serial/BuildWithBuildArg]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p image-734000 logs -n 25
helpers_test.go:252: TestImageBuild/serial/BuildWithBuildArg logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                           Args                           |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-922000 image ls                               | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	| update-context | functional-922000                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| image          | functional-922000 image save --daemon                    | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-922000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| update-context | functional-922000                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | update-context                                           |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                   |                   |         |         |                     |                     |
	| image          | functional-922000                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT |                     |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-922000 image load --daemon                    | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-922000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-922000 image ls                               | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	| image          | functional-922000                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-922000 image save                             | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-922000 |                   |         |         |                     |                     |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-922000 image rm                               | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-922000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-922000 image ls                               | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	| image          | functional-922000 image load                             | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | /Users/jenkins/workspace/addon-resizer-save.tar          |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-922000 image ls                               | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	| image          | functional-922000 image save --daemon                    | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | gcr.io/google-containers/addon-resizer:functional-922000 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-922000                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | image ls --format short                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-922000                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT |                     |
	|                | image ls --format yaml                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-922000                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | image ls --format json                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| ssh            | functional-922000 ssh pgrep                              | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT |                     |
	|                | buildkitd                                                |                   |         |         |                     |                     |
	| image          | functional-922000                                        | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | image ls --format table                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-922000 image build -t                         | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | localhost/my-image:functional-922000                     |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                         |                   |         |         |                     |                     |
	| image          | functional-922000 image ls                               | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	| delete         | -p functional-922000                                     | functional-922000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	| start          | -p image-734000 --driver=qemu2                           | image-734000      | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                |                                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-734000      | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | ./testdata/image-build/test-normal                       |                   |         |         |                     |                     |
	|                | -p image-734000                                          |                   |         |         |                     |                     |
	| image          | build -t aaa:latest                                      | image-734000      | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|                | --build-opt=build-arg=ENV_A=test_env_str                 |                   |         |         |                     |                     |
	|                | --build-opt=no-cache                                     |                   |         |         |                     |                     |
	|                | ./testdata/image-build/test-arg -p                       |                   |         |         |                     |                     |
	|                | image-734000                                             |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 07:09:26
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 07:09:26.881520    2089 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:09:26.881635    2089 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:09:26.881637    2089 out.go:309] Setting ErrFile to fd 2...
	I0610 07:09:26.881638    2089 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:09:26.881713    2089 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:09:26.883021    2089 out.go:303] Setting JSON to false
	I0610 07:09:26.901501    2089 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":536,"bootTime":1686405630,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:09:26.901591    2089 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:09:26.906022    2089 out.go:177] * [image-734000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:09:26.913073    2089 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:09:26.913097    2089 notify.go:220] Checking for updates...
	I0610 07:09:26.917014    2089 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:09:26.920001    2089 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:09:26.923061    2089 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:09:26.926026    2089 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:09:26.929001    2089 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:09:26.932141    2089 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:09:26.935996    2089 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:09:26.943022    2089 start.go:297] selected driver: qemu2
	I0610 07:09:26.943025    2089 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:09:26.943030    2089 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:09:26.943076    2089 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:09:26.946015    2089 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:09:26.951133    2089 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 07:09:26.951213    2089 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 07:09:26.951225    2089 cni.go:84] Creating CNI manager for ""
	I0610 07:09:26.951230    2089 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:09:26.951233    2089 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:09:26.951240    2089 start_flags.go:319] config:
	{Name:image-734000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:image-734000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:09:26.951363    2089 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:09:26.958024    2089 out.go:177] * Starting control plane node image-734000 in cluster image-734000
	I0610 07:09:26.962044    2089 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:09:26.962067    2089 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:09:26.962076    2089 cache.go:57] Caching tarball of preloaded images
	I0610 07:09:26.962134    2089 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:09:26.962138    2089 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:09:26.962321    2089 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/config.json ...
	I0610 07:09:26.962331    2089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/config.json: {Name:mkd2faefc49b52652e9c0cb6e02f0cb55cdc10aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:09:26.962528    2089 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:09:26.962535    2089 start.go:364] acquiring machines lock for image-734000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:09:26.962562    2089 start.go:368] acquired machines lock for "image-734000" in 23.75µs
	I0610 07:09:26.962571    2089 start.go:93] Provisioning new machine with config: &{Name:image-734000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:image-734000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:09:26.962592    2089 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:09:26.970069    2089 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0610 07:09:26.992602    2089 start.go:159] libmachine.API.Create for "image-734000" (driver="qemu2")
	I0610 07:09:26.992623    2089 client.go:168] LocalClient.Create starting
	I0610 07:09:26.992694    2089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:09:26.992711    2089 main.go:141] libmachine: Decoding PEM data...
	I0610 07:09:26.992719    2089 main.go:141] libmachine: Parsing certificate...
	I0610 07:09:26.992764    2089 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:09:26.992776    2089 main.go:141] libmachine: Decoding PEM data...
	I0610 07:09:26.992781    2089 main.go:141] libmachine: Parsing certificate...
	I0610 07:09:26.993067    2089 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:09:27.246723    2089 main.go:141] libmachine: Creating SSH key...
	I0610 07:09:27.317230    2089 main.go:141] libmachine: Creating Disk image...
	I0610 07:09:27.317235    2089 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:09:27.317390    2089 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/disk.qcow2
	I0610 07:09:27.330814    2089 main.go:141] libmachine: STDOUT: 
	I0610 07:09:27.330825    2089 main.go:141] libmachine: STDERR: 
	I0610 07:09:27.330873    2089 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/disk.qcow2 +20000M
	I0610 07:09:27.337946    2089 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:09:27.337955    2089 main.go:141] libmachine: STDERR: 
	I0610 07:09:27.337967    2089 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/disk.qcow2
	I0610 07:09:27.337972    2089 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:09:27.338002    2089 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:c1:50:46:e2:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/disk.qcow2
	I0610 07:09:27.373242    2089 main.go:141] libmachine: STDOUT: 
	I0610 07:09:27.373265    2089 main.go:141] libmachine: STDERR: 
	I0610 07:09:27.373268    2089 main.go:141] libmachine: Attempt 0
	I0610 07:09:27.373282    2089 main.go:141] libmachine: Searching for 66:c1:50:46:e2:b4 in /var/db/dhcpd_leases ...
	I0610 07:09:27.373505    2089 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 07:09:27.373525    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:09:27.373534    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:09:27.373538    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:09:29.375619    2089 main.go:141] libmachine: Attempt 1
	I0610 07:09:29.375663    2089 main.go:141] libmachine: Searching for 66:c1:50:46:e2:b4 in /var/db/dhcpd_leases ...
	I0610 07:09:29.376097    2089 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 07:09:29.376140    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:09:29.376168    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:09:29.376214    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:09:31.378308    2089 main.go:141] libmachine: Attempt 2
	I0610 07:09:31.378326    2089 main.go:141] libmachine: Searching for 66:c1:50:46:e2:b4 in /var/db/dhcpd_leases ...
	I0610 07:09:31.378445    2089 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 07:09:31.378456    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:09:31.378461    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:09:31.378465    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:09:33.380427    2089 main.go:141] libmachine: Attempt 3
	I0610 07:09:33.380431    2089 main.go:141] libmachine: Searching for 66:c1:50:46:e2:b4 in /var/db/dhcpd_leases ...
	I0610 07:09:33.380461    2089 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 07:09:33.380466    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:09:33.380470    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:09:33.380475    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:09:35.382429    2089 main.go:141] libmachine: Attempt 4
	I0610 07:09:35.382433    2089 main.go:141] libmachine: Searching for 66:c1:50:46:e2:b4 in /var/db/dhcpd_leases ...
	I0610 07:09:35.382491    2089 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 07:09:35.382497    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:09:35.382501    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:09:35.382505    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:09:37.384527    2089 main.go:141] libmachine: Attempt 5
	I0610 07:09:37.384558    2089 main.go:141] libmachine: Searching for 66:c1:50:46:e2:b4 in /var/db/dhcpd_leases ...
	I0610 07:09:37.384666    2089 main.go:141] libmachine: Found 3 entries in /var/db/dhcpd_leases!
	I0610 07:09:37.384675    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:09:37.384681    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:09:37.384686    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:09:39.386695    2089 main.go:141] libmachine: Attempt 6
	I0610 07:09:39.386710    2089 main.go:141] libmachine: Searching for 66:c1:50:46:e2:b4 in /var/db/dhcpd_leases ...
	I0610 07:09:39.386843    2089 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 07:09:39.386853    2089 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:66:c1:50:46:e2:b4 ID:1,66:c1:50:46:e2:b4 Lease:0x6485d5a2}
	I0610 07:09:39.386856    2089 main.go:141] libmachine: Found match: 66:c1:50:46:e2:b4
	I0610 07:09:39.386866    2089 main.go:141] libmachine: IP: 192.168.105.5
	I0610 07:09:39.386870    2089 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.5)...
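
The loop above polls macOS's DHCP lease database every two seconds until the freshly generated MAC address shows up, then takes the matching IP. A minimal shell sketch of the same lookup, assuming the usual /var/db/dhcpd_leases stanza format (name=/ip_address=/hw_address= fields, with hw_address carrying the "1," hardware-type prefix seen in the entries above):

	$ awk -v mac='1,66:c1:50:46:e2:b4' '
	    /^\{/            { ip="" }                       # start of a lease stanza
	    /ip_address=/    { split($0, a, "="); ip=a[2] }  # remember the stanza IP
	    /hw_address=/ && index($0, mac) { print ip }     # emit it when the MAC matches
	  ' /var/db/dhcpd_leases
	192.168.105.5
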
	I0610 07:09:41.433149    2089 machine.go:88] provisioning docker machine ...
	I0610 07:09:41.433220    2089 buildroot.go:166] provisioning hostname "image-734000"
	I0610 07:09:41.433480    2089 main.go:141] libmachine: Using SSH client type: native
	I0610 07:09:41.434476    2089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044586d0] 0x10445b130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 07:09:41.434492    2089 main.go:141] libmachine: About to run SSH command:
	sudo hostname image-734000 && echo "image-734000" | sudo tee /etc/hostname
	I0610 07:09:41.518592    2089 main.go:141] libmachine: SSH cmd err, output: <nil>: image-734000
	
	I0610 07:09:41.518699    2089 main.go:141] libmachine: Using SSH client type: native
	I0610 07:09:41.519171    2089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044586d0] 0x10445b130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 07:09:41.519184    2089 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\simage-734000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 image-734000/g' /etc/hosts;
				else 
					echo '127.0.1.1 image-734000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 07:09:41.585515    2089 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 07:09:41.585528    2089 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15074-894/.minikube CaCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15074-894/.minikube}
	I0610 07:09:41.585542    2089 buildroot.go:174] setting up certificates
	I0610 07:09:41.585549    2089 provision.go:83] configureAuth start
	I0610 07:09:41.585553    2089 provision.go:138] copyHostCerts
	I0610 07:09:41.585681    2089 exec_runner.go:144] found /Users/jenkins/minikube-integration/15074-894/.minikube/ca.pem, removing ...
	I0610 07:09:41.585686    2089 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15074-894/.minikube/ca.pem
	I0610 07:09:41.585848    2089 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/ca.pem (1078 bytes)
	I0610 07:09:41.586158    2089 exec_runner.go:144] found /Users/jenkins/minikube-integration/15074-894/.minikube/cert.pem, removing ...
	I0610 07:09:41.586161    2089 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15074-894/.minikube/cert.pem
	I0610 07:09:41.586219    2089 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/cert.pem (1123 bytes)
	I0610 07:09:41.586373    2089 exec_runner.go:144] found /Users/jenkins/minikube-integration/15074-894/.minikube/key.pem, removing ...
	I0610 07:09:41.586375    2089 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15074-894/.minikube/key.pem
	I0610 07:09:41.586428    2089 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/key.pem (1679 bytes)
	I0610 07:09:41.586559    2089 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca-key.pem org=jenkins.image-734000 san=[192.168.105.5 192.168.105.5 localhost 127.0.0.1 minikube image-734000]
	I0610 07:09:41.673710    2089 provision.go:172] copyRemoteCerts
	I0610 07:09:41.673751    2089 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 07:09:41.673757    2089 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/id_rsa Username:docker}
	I0610 07:09:41.704242    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 07:09:41.711769    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0610 07:09:41.719155    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0610 07:09:41.726263    2089 provision.go:86] duration metric: configureAuth took 140.715542ms
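
configureAuth generates a server certificate whose SANs cover the VM IP, localhost, and both machine names, signed by the local minikube CA (the provisioner does this in Go; the openssl commands below are only an illustration of the same operation, with -days chosen arbitrarily):

	$ openssl req -new -key server-key.pem -subj "/O=jenkins.image-734000" -out server.csr
	$ openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf 'subjectAltName=IP:192.168.105.5,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:image-734000') \
	    -out server.pem
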
	I0610 07:09:41.726268    2089 buildroot.go:189] setting minikube options for container-runtime
	I0610 07:09:41.726361    2089 config.go:182] Loaded profile config "image-734000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:09:41.726393    2089 main.go:141] libmachine: Using SSH client type: native
	I0610 07:09:41.726596    2089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044586d0] 0x10445b130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 07:09:41.726599    2089 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 07:09:41.779209    2089 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 07:09:41.779213    2089 buildroot.go:70] root file system type: tmpfs
	I0610 07:09:41.779260    2089 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 07:09:41.779318    2089 main.go:141] libmachine: Using SSH client type: native
	I0610 07:09:41.779541    2089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044586d0] 0x10445b130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 07:09:41.779573    2089 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 07:09:41.837522    2089 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 07:09:41.837560    2089 main.go:141] libmachine: Using SSH client type: native
	I0610 07:09:41.837809    2089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044586d0] 0x10445b130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 07:09:41.837816    2089 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 07:09:42.176942    2089 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 07:09:42.176950    2089 machine.go:91] provisioned docker machine in 743.812ms
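
The diff-then-swap one-liner above only installs the rendered unit and restarts Docker when the file actually changed, which keeps repeated provisioning runs idempotent (here the diff fails because no unit existed yet, so the new file is moved into place and the service is enabled). Two hedged follow-up checks, not part of this log, that confirm the ExecStart= reset in the drop-in worked:

	$ systemctl cat docker.service            # the unit file systemd actually loaded
	$ systemctl show docker -p ExecStart      # exactly one ExecStart should survive the reset
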
	I0610 07:09:42.176954    2089 client.go:171] LocalClient.Create took 15.184842334s
	I0610 07:09:42.176971    2089 start.go:167] duration metric: libmachine.API.Create for "image-734000" took 15.184887417s
	I0610 07:09:42.176974    2089 start.go:300] post-start starting for "image-734000" (driver="qemu2")
	I0610 07:09:42.176976    2089 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 07:09:42.177039    2089 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 07:09:42.177047    2089 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/id_rsa Username:docker}
	I0610 07:09:42.208846    2089 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 07:09:42.210840    2089 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 07:09:42.210846    2089 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15074-894/.minikube/addons for local assets ...
	I0610 07:09:42.210905    2089 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15074-894/.minikube/files for local assets ...
	I0610 07:09:42.211013    2089 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem -> 13362.pem in /etc/ssl/certs
	I0610 07:09:42.211125    2089 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 07:09:42.213997    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem --> /etc/ssl/certs/13362.pem (1708 bytes)
	I0610 07:09:42.224239    2089 start.go:303] post-start completed in 47.25825ms
	I0610 07:09:42.224661    2089 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/config.json ...
	I0610 07:09:42.224822    2089 start.go:128] duration metric: createHost completed in 15.262742791s
	I0610 07:09:42.224852    2089 main.go:141] libmachine: Using SSH client type: native
	I0610 07:09:42.225080    2089 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1044586d0] 0x10445b130 <nil>  [] 0s} 192.168.105.5 22 <nil> <nil>}
	I0610 07:09:42.225083    2089 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0610 07:09:42.278545    2089 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686406182.523733919
	
	I0610 07:09:42.278551    2089 fix.go:207] guest clock: 1686406182.523733919
	I0610 07:09:42.278555    2089 fix.go:220] Guest: 2023-06-10 07:09:42.523733919 -0700 PDT Remote: 2023-06-10 07:09:42.224825 -0700 PDT m=+15.365393334 (delta=298.908919ms)
	I0610 07:09:42.278565    2089 fix.go:191] guest clock delta is within tolerance: 298.908919ms
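
The guest-clock check runs date +%s.%N inside the VM and compares it against host wall time, resyncing only if the delta exceeds the tolerance. A rough host-side sketch of the same comparison (assumes GNU date for %N, since macOS's BSD date lacks it, and the machine's usual SSH key path):

	$ GUEST=$(ssh -i .minikube/machines/image-734000/id_rsa docker@192.168.105.5 date +%s.%N)
	$ HOST=$(date +%s.%N)
	$ echo "delta: $(echo "$GUEST - $HOST" | bc)s"
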
	I0610 07:09:42.278567    2089 start.go:83] releasing machines lock for "image-734000", held for 15.316520625s
	I0610 07:09:42.278827    2089 ssh_runner.go:195] Run: cat /version.json
	I0610 07:09:42.278833    2089 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/id_rsa Username:docker}
	I0610 07:09:42.278862    2089 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 07:09:42.278900    2089 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/id_rsa Username:docker}
	I0610 07:09:42.353568    2089 ssh_runner.go:195] Run: systemctl --version
	I0610 07:09:42.355680    2089 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 07:09:42.357855    2089 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 07:09:42.357886    2089 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0610 07:09:42.362905    2089 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 07:09:42.362910    2089 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:09:42.362984    2089 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 07:09:42.370764    2089 docker.go:633] Got preloaded images: 
	I0610 07:09:42.370774    2089 docker.go:639] registry.k8s.io/kube-apiserver:v1.27.2 wasn't preloaded
	I0610 07:09:42.370812    2089 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 07:09:42.373600    2089 ssh_runner.go:195] Run: which lz4
	I0610 07:09:42.374967    2089 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0610 07:09:42.376225    2089 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 07:09:42.376236    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (343635669 bytes)
	I0610 07:09:43.670129    2089 docker.go:597] Took 1.295238 seconds to copy over tarball
	I0610 07:09:43.670175    2089 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 07:09:44.709177    2089 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.039025709s)
	I0610 07:09:44.709186    2089 ssh_runner.go:146] rm: /preloaded.tar.lz4
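
The preload path sidesteps pulling eight images individually: a ~344 MB lz4 tarball of a pre-populated /var/lib/docker is copied in and unpacked over /var, after which the image store is already warm. A manual equivalent, assuming the same key and paths as this run:

	$ scp -i id_rsa cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 \
	    docker@192.168.105.5:/preloaded.tar.lz4
	$ ssh -i id_rsa docker@192.168.105.5 \
	    'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'
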
	I0610 07:09:44.724454    2089 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 07:09:44.727949    2089 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I0610 07:09:44.733541    2089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:09:44.805393    2089 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 07:09:45.982568    2089 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.177201167s)
	I0610 07:09:45.982587    2089 start.go:481] detecting cgroup driver to use...
	I0610 07:09:45.982670    2089 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 07:09:45.988020    2089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0610 07:09:45.991237    2089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 07:09:45.994413    2089 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 07:09:45.994435    2089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 07:09:45.997384    2089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 07:09:46.000595    2089 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 07:09:46.003565    2089 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 07:09:46.006386    2089 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 07:09:46.009196    2089 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 07:09:46.012433    2089 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 07:09:46.015878    2089 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 07:09:46.018997    2089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:09:46.091887    2089 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 07:09:46.101326    2089 start.go:481] detecting cgroup driver to use...
	I0610 07:09:46.101383    2089 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 07:09:46.107209    2089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 07:09:46.111947    2089 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 07:09:46.117488    2089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 07:09:46.122237    2089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 07:09:46.126556    2089 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 07:09:46.168747    2089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 07:09:46.174273    2089 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 07:09:46.179884    2089 ssh_runner.go:195] Run: which cri-dockerd
	I0610 07:09:46.181256    2089 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 07:09:46.183925    2089 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 07:09:46.188844    2089 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 07:09:46.265025    2089 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 07:09:46.341343    2089 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 07:09:46.341352    2089 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 07:09:46.346497    2089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:09:46.422954    2089 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 07:09:47.595209    2089 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.172282333s)
	I0610 07:09:47.595268    2089 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 07:09:47.671842    2089 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0610 07:09:47.759048    2089 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0610 07:09:47.835651    2089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:09:47.913979    2089 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0610 07:09:47.920753    2089 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:09:47.994722    2089 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I0610 07:09:48.017640    2089 start.go:528] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0610 07:09:48.017718    2089 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
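
"Will wait 60s for socket path" is a stat-based poll on /var/run/cri-dockerd.sock; in this run the socket is already up on the first probe. A hedged equivalent of that wait loop:

	$ timeout 60 sh -c 'until [ -S /var/run/cri-dockerd.sock ]; do sleep 1; done'
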
	I0610 07:09:48.020451    2089 start.go:549] Will wait 60s for crictl version
	I0610 07:09:48.020494    2089 ssh_runner.go:195] Run: which crictl
	I0610 07:09:48.021971    2089 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0610 07:09:48.039174    2089 start.go:565] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.2
	RuntimeApiVersion:  v1alpha2
	I0610 07:09:48.039234    2089 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 07:09:48.050627    2089 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 07:09:48.066036    2089 out.go:204] * Preparing Kubernetes v1.27.2 on Docker 24.0.2 ...
	I0610 07:09:48.066191    2089 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0610 07:09:48.067525    2089 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 07:09:48.071470    2089 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:09:48.071512    2089 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 07:09:48.081395    2089 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 07:09:48.081400    2089 docker.go:563] Images already preloaded, skipping extraction
	I0610 07:09:48.081442    2089 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 07:09:48.087087    2089 docker.go:633] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.27.2
	registry.k8s.io/kube-controller-manager:v1.27.2
	registry.k8s.io/kube-scheduler:v1.27.2
	registry.k8s.io/kube-proxy:v1.27.2
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/etcd:3.5.7-0
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0610 07:09:48.087103    2089 cache_images.go:84] Images are preloaded, skipping loading
	I0610 07:09:48.087155    2089 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 07:09:48.097674    2089 cni.go:84] Creating CNI manager for ""
	I0610 07:09:48.097680    2089 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:09:48.097691    2089 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 07:09:48.097698    2089 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.5 APIServerPort:8443 KubernetesVersion:v1.27.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:image-734000 NodeName:image-734000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0610 07:09:48.097759    2089 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "image-734000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.27.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
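
The generated file stacks four kubeadm API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---; kubeadm splits them by kind when parsing. One hedged way to sanity-check the file before the real init is kubeadm's dry-run mode, once it has been staged at the path used later in this run:

	$ sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
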
	
	I0610 07:09:48.097788    2089 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.27.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=image-734000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.27.2 ClusterName:image-734000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 07:09:48.097843    2089 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.27.2
	I0610 07:09:48.101015    2089 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 07:09:48.101052    2089 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 07:09:48.104139    2089 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I0610 07:09:48.109375    2089 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0610 07:09:48.114371    2089 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0610 07:09:48.119913    2089 ssh_runner.go:195] Run: grep 192.168.105.5	control-plane.minikube.internal$ /etc/hosts
	I0610 07:09:48.121249    2089 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 07:09:48.124926    2089 certs.go:56] Setting up /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000 for IP: 192.168.105.5
	I0610 07:09:48.124933    2089 certs.go:190] acquiring lock for shared ca certs: {Name:mk2bb46910d2e2fc8cdcab49d7502062bd19dc79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:09:48.125063    2089 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15074-894/.minikube/ca.key
	I0610 07:09:48.125789    2089 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15074-894/.minikube/proxy-client-ca.key
	I0610 07:09:48.125818    2089 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/client.key
	I0610 07:09:48.125823    2089 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/client.crt with IP's: []
	I0610 07:09:48.194730    2089 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/client.crt ...
	I0610 07:09:48.194734    2089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/client.crt: {Name:mkb9602be7c19ad5fff1275ca8f0254bfcafd23f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:09:48.194939    2089 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/client.key ...
	I0610 07:09:48.194941    2089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/client.key: {Name:mk7901d7aa16e6218bfb15dac547c22255e44c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:09:48.195052    2089 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.key.e69b33ca
	I0610 07:09:48.195057    2089 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.crt.e69b33ca with IP's: [192.168.105.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 07:09:48.235220    2089 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.crt.e69b33ca ...
	I0610 07:09:48.235222    2089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.crt.e69b33ca: {Name:mkc3ff7b51202b0f7b47c573d8d20e877f4aa1b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:09:48.235348    2089 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.key.e69b33ca ...
	I0610 07:09:48.235349    2089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.key.e69b33ca: {Name:mk7cafae7320f14ec391f139dcbdab0ac0f23577 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:09:48.235451    2089 certs.go:337] copying /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.crt.e69b33ca -> /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.crt
	I0610 07:09:48.235531    2089 certs.go:341] copying /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.key.e69b33ca -> /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.key
	I0610 07:09:48.235618    2089 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/proxy-client.key
	I0610 07:09:48.235622    2089 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/proxy-client.crt with IP's: []
	I0610 07:09:48.268747    2089 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/proxy-client.crt ...
	I0610 07:09:48.268750    2089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/proxy-client.crt: {Name:mk83248d054dab6e24b80665013a2df557d08140 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:09:48.268881    2089 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/proxy-client.key ...
	I0610 07:09:48.268883    2089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/proxy-client.key: {Name:mk76600035f511df18628090de0335ec5cd1a7c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:09:48.269118    2089 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/1336.pem (1338 bytes)
	W0610 07:09:48.269302    2089 certs.go:433] ignoring /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/1336_empty.pem, impossibly tiny 0 bytes
	I0610 07:09:48.269308    2089 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca-key.pem (1675 bytes)
	I0610 07:09:48.269331    2089 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem (1078 bytes)
	I0610 07:09:48.269349    2089 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem (1123 bytes)
	I0610 07:09:48.269366    2089 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem (1679 bytes)
	I0610 07:09:48.269415    2089 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem (1708 bytes)
	I0610 07:09:48.269710    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 07:09:48.276682    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 07:09:48.283749    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 07:09:48.291071    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/image-734000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0610 07:09:48.297753    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 07:09:48.304429    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0610 07:09:48.311717    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 07:09:48.319037    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 07:09:48.325885    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem --> /usr/share/ca-certificates/13362.pem (1708 bytes)
	I0610 07:09:48.332476    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 07:09:48.339525    2089 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/certs/1336.pem --> /usr/share/ca-certificates/1336.pem (1338 bytes)
	I0610 07:09:48.346273    2089 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 07:09:48.351280    2089 ssh_runner.go:195] Run: openssl version
	I0610 07:09:48.353370    2089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13362.pem && ln -fs /usr/share/ca-certificates/13362.pem /etc/ssl/certs/13362.pem"
	I0610 07:09:48.356738    2089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13362.pem
	I0610 07:09:48.358214    2089 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 14:05 /usr/share/ca-certificates/13362.pem
	I0610 07:09:48.358235    2089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13362.pem
	I0610 07:09:48.359986    2089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13362.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 07:09:48.363216    2089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 07:09:48.366071    2089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 07:09:48.367614    2089 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 14:05 /usr/share/ca-certificates/minikubeCA.pem
	I0610 07:09:48.367634    2089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 07:09:48.369492    2089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 07:09:48.372614    2089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1336.pem && ln -fs /usr/share/ca-certificates/1336.pem /etc/ssl/certs/1336.pem"
	I0610 07:09:48.375811    2089 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1336.pem
	I0610 07:09:48.377485    2089 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 14:05 /usr/share/ca-certificates/1336.pem
	I0610 07:09:48.377503    2089 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1336.pem
	I0610 07:09:48.379280    2089 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1336.pem /etc/ssl/certs/51391683.0"
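
Each CA lands under /usr/share/ca-certificates and is symlinked into /etc/ssl/certs under its OpenSSL subject-name hash (the 3ec20f2e.0, b5213941.0, and 51391683.0 links above), which is how OpenSSL's default lookup finds trust anchors. The hash in those link names comes straight from the command in the log:

	$ HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
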
	I0610 07:09:48.382192    2089 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 07:09:48.383378    2089 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 07:09:48.383407    2089 kubeadm.go:404] StartCluster: {Name:image-734000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:image-734000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:09:48.383469    2089 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 07:09:48.388846    2089 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 07:09:48.392308    2089 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 07:09:48.395527    2089 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 07:09:48.398317    2089 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 07:09:48.398328    2089 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.27.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0610 07:09:48.419352    2089 kubeadm.go:322] [init] Using Kubernetes version: v1.27.2
	I0610 07:09:48.419377    2089 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 07:09:48.473208    2089 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 07:09:48.473272    2089 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 07:09:48.473322    2089 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 07:09:48.541884    2089 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 07:09:48.550053    2089 out.go:204]   - Generating certificates and keys ...
	I0610 07:09:48.550087    2089 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 07:09:48.550121    2089 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 07:09:48.672431    2089 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 07:09:48.751978    2089 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 07:09:48.875866    2089 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 07:09:49.035229    2089 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 07:09:49.092642    2089 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 07:09:49.092703    2089 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [image-734000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0610 07:09:49.171829    2089 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 07:09:49.171907    2089 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [image-734000 localhost] and IPs [192.168.105.5 127.0.0.1 ::1]
	I0610 07:09:49.301038    2089 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 07:09:49.544858    2089 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 07:09:49.594849    2089 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 07:09:49.594881    2089 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 07:09:49.719388    2089 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 07:09:49.762192    2089 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 07:09:49.872366    2089 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 07:09:49.946335    2089 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 07:09:49.952550    2089 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 07:09:49.952956    2089 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 07:09:49.952975    2089 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 07:09:50.040294    2089 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 07:09:50.047449    2089 out.go:204]   - Booting up control plane ...
	I0610 07:09:50.047526    2089 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 07:09:50.047590    2089 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 07:09:50.047625    2089 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 07:09:50.047674    2089 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 07:09:50.047977    2089 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 07:09:54.052431    2089 kubeadm.go:322] [apiclient] All control plane components are healthy after 4.003497 seconds
	I0610 07:09:54.052573    2089 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 07:09:54.061747    2089 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 07:09:54.574398    2089 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 07:09:54.574500    2089 kubeadm.go:322] [mark-control-plane] Marking the node image-734000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0610 07:09:55.099367    2089 kubeadm.go:322] [bootstrap-token] Using token: mu0s8n.iunfqbjvg2x9d50l
	I0610 07:09:55.103125    2089 out.go:204]   - Configuring RBAC rules ...
	I0610 07:09:55.103288    2089 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 07:09:55.115453    2089 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 07:09:55.121643    2089 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 07:09:55.124009    2089 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 07:09:55.126452    2089 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 07:09:55.131882    2089 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 07:09:55.141488    2089 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 07:09:55.333966    2089 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 07:09:55.517782    2089 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 07:09:55.518209    2089 kubeadm.go:322] 
	I0610 07:09:55.518240    2089 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 07:09:55.518242    2089 kubeadm.go:322] 
	I0610 07:09:55.518278    2089 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 07:09:55.518279    2089 kubeadm.go:322] 
	I0610 07:09:55.518290    2089 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 07:09:55.518316    2089 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 07:09:55.518347    2089 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 07:09:55.518350    2089 kubeadm.go:322] 
	I0610 07:09:55.518400    2089 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0610 07:09:55.518402    2089 kubeadm.go:322] 
	I0610 07:09:55.518430    2089 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0610 07:09:55.518432    2089 kubeadm.go:322] 
	I0610 07:09:55.518459    2089 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 07:09:55.518497    2089 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 07:09:55.518533    2089 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 07:09:55.518535    2089 kubeadm.go:322] 
	I0610 07:09:55.518580    2089 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 07:09:55.518619    2089 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 07:09:55.518622    2089 kubeadm.go:322] 
	I0610 07:09:55.518673    2089 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token mu0s8n.iunfqbjvg2x9d50l \
	I0610 07:09:55.518733    2089 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f81669ad7d2f234b34c57c88f17d06eff870ea4064b7e3e4d3b3eb3883ffeaf2 \
	I0610 07:09:55.518747    2089 kubeadm.go:322] 	--control-plane 
	I0610 07:09:55.518750    2089 kubeadm.go:322] 
	I0610 07:09:55.518790    2089 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 07:09:55.518795    2089 kubeadm.go:322] 
	I0610 07:09:55.518834    2089 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token mu0s8n.iunfqbjvg2x9d50l \
	I0610 07:09:55.518897    2089 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:f81669ad7d2f234b34c57c88f17d06eff870ea4064b7e3e4d3b3eb3883ffeaf2 
	I0610 07:09:55.518953    2089 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 07:09:55.519049    2089 kubeadm.go:322] W0610 14:09:48.718608    1391 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 07:09:55.519140    2089 kubeadm.go:322] W0610 14:09:50.291937    1391 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
	I0610 07:09:55.519149    2089 cni.go:84] Creating CNI manager for ""
	I0610 07:09:55.519156    2089 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:09:55.526229    2089 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0610 07:09:55.529346    2089 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0610 07:09:55.532358    2089 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0610 07:09:55.537148    2089 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 07:09:55.537203    2089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:09:55.537209    2089 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.27.2/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=3273891fc7fc0f39c65075197baa2d52fc489f6f minikube.k8s.io/name=image-734000 minikube.k8s.io/updated_at=2023_06_10T07_09_55_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:09:55.579348    2089 kubeadm.go:1076] duration metric: took 42.179792ms to wait for elevateKubeSystemPrivileges.
	I0610 07:09:55.602671    2089 ops.go:34] apiserver oom_adj: -16
	I0610 07:09:55.602679    2089 kubeadm.go:406] StartCluster complete in 7.219517375s
	I0610 07:09:55.602690    2089 settings.go:142] acquiring lock: {Name:mk4cd069708b06d9de03f9b5393c32ff96cdd016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:09:55.602770    2089 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:09:55.603107    2089 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/kubeconfig: {Name:mkac2e0f9c3956b550c91557119bdbcf28863bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:09:55.603321    2089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 07:09:55.603345    2089 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0610 07:09:55.603411    2089 addons.go:66] Setting storage-provisioner=true in profile "image-734000"
	I0610 07:09:55.603422    2089 addons.go:228] Setting addon storage-provisioner=true in "image-734000"
	I0610 07:09:55.603452    2089 host.go:66] Checking if "image-734000" exists ...
	I0610 07:09:55.603436    2089 addons.go:66] Setting default-storageclass=true in profile "image-734000"
	I0610 07:09:55.603503    2089 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "image-734000"
	I0610 07:09:55.603577    2089 config.go:182] Loaded profile config "image-734000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	W0610 07:09:55.603753    2089 host.go:54] host status for "image-734000" returned error: state: connect: dial unix /Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/monitor: connect: connection refused
	W0610 07:09:55.603761    2089 addons_storage_classes.go:55] "image-734000" is not running, writing default-storageclass=true to disk and skipping enablement
	I0610 07:09:55.603763    2089 addons.go:228] Setting addon default-storageclass=true in "image-734000"
	I0610 07:09:55.603769    2089 host.go:66] Checking if "image-734000" exists ...
	I0610 07:09:55.605795    2089 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 07:09:55.609307    2089 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 07:09:55.609312    2089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 07:09:55.609320    2089 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/id_rsa Username:docker}
	I0610 07:09:55.610107    2089 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 07:09:55.610110    2089 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 07:09:55.610113    2089 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/image-734000/id_rsa Username:docker}
	I0610 07:09:55.649268    2089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 07:09:55.657410    2089 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.27.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 07:09:55.660740    2089 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.27.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 07:09:56.069652    2089 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0610 07:09:56.125046    2089 kapi.go:248] "coredns" deployment in "kube-system" namespace and "image-734000" context rescaled to 1 replicas
	I0610 07:09:56.125058    2089 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:09:56.132148    2089 out.go:177] * Verifying Kubernetes components...
	I0610 07:09:56.135920    2089 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 07:09:56.152104    2089 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0610 07:09:56.148646    2089 api_server.go:52] waiting for apiserver process to appear ...
	I0610 07:09:56.160097    2089 addons.go:499] enable addons completed in 556.777708ms: enabled=[default-storageclass storage-provisioner]
	I0610 07:09:56.160128    2089 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 07:09:56.164207    2089 api_server.go:72] duration metric: took 39.140792ms to wait for apiserver process to appear ...
	I0610 07:09:56.164214    2089 api_server.go:88] waiting for apiserver healthz status ...
	I0610 07:09:56.164218    2089 api_server.go:253] Checking apiserver healthz at https://192.168.105.5:8443/healthz ...
	I0610 07:09:56.167590    2089 api_server.go:279] https://192.168.105.5:8443/healthz returned 200:
	ok
	I0610 07:09:56.168337    2089 api_server.go:141] control plane version: v1.27.2
	I0610 07:09:56.168342    2089 api_server.go:131] duration metric: took 4.126084ms to wait for apiserver health ...
	I0610 07:09:56.168344    2089 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 07:09:56.171074    2089 system_pods.go:59] 5 kube-system pods found
	I0610 07:09:56.171080    2089 system_pods.go:61] "etcd-image-734000" [f3383aaa-2ab2-4edb-981f-bf4402408b91] Pending
	I0610 07:09:56.171082    2089 system_pods.go:61] "kube-apiserver-image-734000" [8089aeda-095a-45b1-916d-95a05d48de11] Pending
	I0610 07:09:56.171084    2089 system_pods.go:61] "kube-controller-manager-image-734000" [615b23f5-529c-4958-be54-82696ddbffcb] Pending
	I0610 07:09:56.171085    2089 system_pods.go:61] "kube-scheduler-image-734000" [92b08b65-647b-4b0b-887a-56ec0d8f4f3e] Pending
	I0610 07:09:56.171087    2089 system_pods.go:61] "storage-provisioner" [14a07fbb-ca68-490f-84ed-887435056ef7] Pending
	I0610 07:09:56.171088    2089 system_pods.go:74] duration metric: took 2.74275ms to wait for pod list to return data ...
	I0610 07:09:56.171092    2089 kubeadm.go:581] duration metric: took 46.026292ms to wait for : map[apiserver:true system_pods:true] ...
	I0610 07:09:56.171096    2089 node_conditions.go:102] verifying NodePressure condition ...
	I0610 07:09:56.172400    2089 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0610 07:09:56.172407    2089 node_conditions.go:123] node cpu capacity is 2
	I0610 07:09:56.172411    2089 node_conditions.go:105] duration metric: took 1.314ms to run NodePressure ...
	I0610 07:09:56.172416    2089 start.go:228] waiting for startup goroutines ...
	I0610 07:09:56.172418    2089 start.go:233] waiting for cluster config update ...
	I0610 07:09:56.172422    2089 start.go:242] writing updated cluster config ...
	I0610 07:09:56.172674    2089 ssh_runner.go:195] Run: rm -f paused
	I0610 07:09:56.201020    2089 start.go:573] kubectl: 1.25.9, cluster: 1.27.2 (minor skew: 2)
	I0610 07:09:56.204155    2089 out.go:177] 
	W0610 07:09:56.208112    2089 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2.
	I0610 07:09:56.212070    2089 out.go:177]   - Want kubectl v1.27.2? Try 'minikube kubectl -- get pods -A'
	I0610 07:09:56.220093    2089 out.go:177] * Done! kubectl is now configured to use "image-734000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 14:09:38 UTC, ends at Sat 2023-06-10 14:09:59 UTC. --
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.206029798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.206264339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.206306256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.206341673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:09:51 image-734000 cri-dockerd[1231]: time="2023-06-10T14:09:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a19e6e8cb3b5b562233bdc729070dc71f29b5831e5b97ff22731669724ba2306/resolv.conf as [nameserver 192.168.105.1]"
	Jun 10 14:09:51 image-734000 cri-dockerd[1231]: time="2023-06-10T14:09:51Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/64e817f15abd516109e6575c19e02614a4c8934eb212938800b3b7002ed2ab0c/resolv.conf as [nameserver 192.168.105.1]"
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.313974881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.314032798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.314048048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.314059298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.321304673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.321335131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.321356631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 14:09:51 image-734000 dockerd[1007]: time="2023-06-10T14:09:51.321363006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:09:58 image-734000 dockerd[1000]: time="2023-06-10T14:09:58.453594259Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jun 10 14:09:58 image-734000 dockerd[1000]: time="2023-06-10T14:09:58.573517051Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jun 10 14:09:58 image-734000 dockerd[1000]: time="2023-06-10T14:09:58.587392510Z" level=info msg="Layer sha256:5e5d01bb2a8d3e34816f24ff1a055b5d084e5a5a1919cd77684120916d61c3eb cleaned up"
	Jun 10 14:09:58 image-734000 dockerd[1007]: time="2023-06-10T14:09:58.627477093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 14:09:58 image-734000 dockerd[1007]: time="2023-06-10T14:09:58.627798760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:09:58 image-734000 dockerd[1007]: time="2023-06-10T14:09:58.627824593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 14:09:58 image-734000 dockerd[1007]: time="2023-06-10T14:09:58.627835135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:09:58 image-734000 dockerd[1007]: time="2023-06-10T14:09:58.759194551Z" level=info msg="shim disconnected" id=c0bb4b0abf18bee2522c06184ba49766b6401e9cf6641f96661256bde0c6431e namespace=moby
	Jun 10 14:09:58 image-734000 dockerd[1007]: time="2023-06-10T14:09:58.759250426Z" level=warning msg="cleaning up after shim disconnected" id=c0bb4b0abf18bee2522c06184ba49766b6401e9cf6641f96661256bde0c6431e namespace=moby
	Jun 10 14:09:58 image-734000 dockerd[1007]: time="2023-06-10T14:09:58.759255135Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 14:09:58 image-734000 dockerd[1000]: time="2023-06-10T14:09:58.759365260Z" level=info msg="ignoring event" container=c0bb4b0abf18bee2522c06184ba49766b6401e9cf6641f96661256bde0c6431e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	dc97bb85ee9b9       305d7ed1dae28       8 seconds ago       Running             kube-scheduler            0                   64e817f15abd5
	9f14a9ff4677d       2ee705380c3c5       8 seconds ago       Running             kube-controller-manager   0                   a19e6e8cb3b5b
	4582a397a544f       72c9df6be7f1b       8 seconds ago       Running             kube-apiserver            0                   d8162d4d8b490
	66e6dfa6e084e       24bc64e911039       8 seconds ago       Running             etcd                      0                   8f4130bf03351
	
	* 
	* ==> describe nodes <==
	* Name:               image-734000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=image-734000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3273891fc7fc0f39c65075197baa2d52fc489f6f
	                    minikube.k8s.io/name=image-734000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T07_09_55_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 14:09:52 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  image-734000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 14:09:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 14:09:58 +0000   Sat, 10 Jun 2023 14:09:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 14:09:58 +0000   Sat, 10 Jun 2023 14:09:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 14:09:58 +0000   Sat, 10 Jun 2023 14:09:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 14:09:58 +0000   Sat, 10 Jun 2023 14:09:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.5
	  Hostname:    image-734000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3905780Ki
	  pods:               110
	System Info:
	  Machine ID:                 34b85883f373410fa098d3479a8ce6b1
	  System UUID:                34b85883f373410fa098d3479a8ce6b1
	  Boot ID:                    1eb83c92-737e-4c3d-a5ca-c9db9f3b92f6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.27.2
	  Kube-Proxy Version:         v1.27.2
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-image-734000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4s
	  kube-system                 kube-apiserver-image-734000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-image-734000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-image-734000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (2%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From     Message
	  ----    ------                   ----  ----     -------
	  Normal  Starting                 4s    kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  4s    kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4s    kubelet  Node image-734000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4s    kubelet  Node image-734000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4s    kubelet  Node image-734000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                1s    kubelet  Node image-734000 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Jun10 14:09] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.633076] EINJ: EINJ table not found.
	[  +0.517377] systemd-fstab-generator[115]: Ignoring "noauto" for root device
	[  +0.043429] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000797] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.145876] systemd-fstab-generator[479]: Ignoring "noauto" for root device
	[  +0.077803] systemd-fstab-generator[490]: Ignoring "noauto" for root device
	[  +2.759066] systemd-fstab-generator[761]: Ignoring "noauto" for root device
	[  +1.290496] systemd-fstab-generator[934]: Ignoring "noauto" for root device
	[  +0.173211] systemd-fstab-generator[969]: Ignoring "noauto" for root device
	[  +0.079161] systemd-fstab-generator[980]: Ignoring "noauto" for root device
	[  +0.080358] systemd-fstab-generator[993]: Ignoring "noauto" for root device
	[  +1.158488] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091169] systemd-fstab-generator[1151]: Ignoring "noauto" for root device
	[  +0.087069] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.076004] systemd-fstab-generator[1173]: Ignoring "noauto" for root device
	[  +0.077813] systemd-fstab-generator[1184]: Ignoring "noauto" for root device
	[  +0.080063] systemd-fstab-generator[1224]: Ignoring "noauto" for root device
	[  +2.042005] systemd-fstab-generator[1481]: Ignoring "noauto" for root device
	[  +5.192759] systemd-fstab-generator[2388]: Ignoring "noauto" for root device
	[  +3.159727] kauditd_printk_skb: 41 callbacks suppressed
	
	* 
	* ==> etcd [66e6dfa6e084] <==
	* {"level":"info","ts":"2023-06-10T14:09:51.415Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"58de0efec1d86300","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2023-06-10T14:09:51.416Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-06-10T14:09:51.417Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"58de0efec1d86300","initial-advertise-peer-urls":["https://192.168.105.5:2380"],"listen-peer-urls":["https://192.168.105.5:2380"],"advertise-client-urls":["https://192.168.105.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-06-10T14:09:51.418Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-06-10T14:09:51.418Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","added-peer-id":"58de0efec1d86300","added-peer-peer-urls":["https://192.168.105.5:2380"]}
	{"level":"info","ts":"2023-06-10T14:09:51.418Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-06-10T14:09:51.418Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.105.5:2380"}
	{"level":"info","ts":"2023-06-10T14:09:52.104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 is starting a new election at term 1"}
	{"level":"info","ts":"2023-06-10T14:09:52.104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-06-10T14:09:52.105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgPreVoteResp from 58de0efec1d86300 at term 1"}
	{"level":"info","ts":"2023-06-10T14:09:52.105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became candidate at term 2"}
	{"level":"info","ts":"2023-06-10T14:09:52.105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 received MsgVoteResp from 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-06-10T14:09:52.105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"58de0efec1d86300 became leader at term 2"}
	{"level":"info","ts":"2023-06-10T14:09:52.105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 58de0efec1d86300 elected leader 58de0efec1d86300 at term 2"}
	{"level":"info","ts":"2023-06-10T14:09:52.105Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:09:52.106Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"58de0efec1d86300","local-member-attributes":"{Name:image-734000 ClientURLs:[https://192.168.105.5:2379]}","request-path":"/0/members/58de0efec1d86300/attributes","cluster-id":"cd5c0afff2184bea","publish-timeout":"7s"}
	{"level":"info","ts":"2023-06-10T14:09:52.106Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T14:09:52.106Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd5c0afff2184bea","local-member-id":"58de0efec1d86300","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:09:52.106Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:09:52.106Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-06-10T14:09:52.106Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-06-10T14:09:52.106Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-06-10T14:09:52.115Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-06-10T14:09:52.115Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-06-10T14:09:52.116Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.105.5:2379"}
	
	* 
	* ==> kernel <==
	*  14:09:59 up 0 min,  0 users,  load average: 0.28, 0.06, 0.02
	Linux image-734000 5.10.57 #1 SMP PREEMPT Wed Jun 7 01:52:34 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [4582a397a544] <==
	* I0610 14:09:52.760072       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0610 14:09:52.779760       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0610 14:09:52.779874       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0610 14:09:52.779900       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0610 14:09:52.779916       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 14:09:52.779941       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 14:09:52.779952       1 cache.go:39] Caches are synced for autoregister controller
	I0610 14:09:52.780485       1 shared_informer.go:318] Caches are synced for configmaps
	I0610 14:09:52.780632       1 controller.go:624] quota admission added evaluator for: namespaces
	I0610 14:09:52.782214       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0610 14:09:52.796655       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 14:09:53.551493       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 14:09:53.687463       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0610 14:09:53.689303       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0610 14:09:53.689309       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0610 14:09:53.846415       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 14:09:53.859247       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0610 14:09:53.951201       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0610 14:09:53.953797       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.105.5]
	I0610 14:09:53.954307       1 controller.go:624] quota admission added evaluator for: endpoints
	I0610 14:09:53.955979       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 14:09:54.742433       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0610 14:09:55.574929       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0610 14:09:55.578758       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0610 14:09:55.582494       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	
	* 
	* ==> kube-controller-manager [9f14a9ff4677] <==
	* I0610 14:09:51.911523       1 serving.go:348] Generated self-signed cert in-memory
	I0610 14:09:52.205432       1 controllermanager.go:187] "Starting" version="v1.27.2"
	I0610 14:09:52.205523       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0610 14:09:52.206222       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0610 14:09:52.206293       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0610 14:09:52.206697       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0610 14:09:52.206763       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0610 14:09:54.738543       1 shared_informer.go:311] Waiting for caches to sync for tokens
	I0610 14:09:54.743886       1 controllermanager.go:638] "Started controller" controller="statefulset"
	I0610 14:09:54.743962       1 stateful_set.go:161] "Starting stateful set controller"
	I0610 14:09:54.743968       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0610 14:09:54.838778       1 shared_informer.go:318] Caches are synced for tokens
	
	* 
	* ==> kube-scheduler [dc97bb85ee9b] <==
	* W0610 14:09:52.755384       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 14:09:52.755387       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0610 14:09:52.755403       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 14:09:52.755407       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0610 14:09:52.755417       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0610 14:09:52.755421       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0610 14:09:52.755353       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 14:09:52.755451       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0610 14:09:52.755302       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 14:09:52.755518       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0610 14:09:52.755291       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 14:09:52.755562       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 14:09:53.581072       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0610 14:09:53.581091       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0610 14:09:53.615982       1 reflector.go:533] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 14:09:53.615997       1 reflector.go:148] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0610 14:09:53.685825       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 14:09:53.685843       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0610 14:09:53.747483       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0610 14:09:53.747503       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0610 14:09:53.790459       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 14:09:53.790557       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0610 14:09:53.793313       1 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 14:09:53.793328       1 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0610 14:09:56.252888       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 14:09:38 UTC, ends at Sat 2023-06-10 14:09:59 UTC. --
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.731577    2407 kubelet_node_status.go:73] "Successfully registered node" node="image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.737263    2407 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.737319    2407 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.737333    2407 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.737346    2407 topology_manager.go:212] "Topology Admit Handler"
	Jun 10 14:09:55 image-734000 kubelet[2407]: E0610 14:09:55.742341    2407 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-scheduler-image-734000\" already exists" pod="kube-system/kube-scheduler-image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.926444    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9b71a4e1538c228c899c79340e489e0-usr-share-ca-certificates\") pod \"kube-controller-manager-image-734000\" (UID: \"a9b71a4e1538c228c899c79340e489e0\") " pod="kube-system/kube-controller-manager-image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.926476    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e851966e584c7868bfb006eb4a6cc12b-kubeconfig\") pod \"kube-scheduler-image-734000\" (UID: \"e851966e584c7868bfb006eb4a6cc12b\") " pod="kube-system/kube-scheduler-image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.926487    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7a5ef6c536c938dc58e7ea87531b0d57-ca-certs\") pod \"kube-apiserver-image-734000\" (UID: \"7a5ef6c536c938dc58e7ea87531b0d57\") " pod="kube-system/kube-apiserver-image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.926496    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7a5ef6c536c938dc58e7ea87531b0d57-k8s-certs\") pod \"kube-apiserver-image-734000\" (UID: \"7a5ef6c536c938dc58e7ea87531b0d57\") " pod="kube-system/kube-apiserver-image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.926506    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9b71a4e1538c228c899c79340e489e0-ca-certs\") pod \"kube-controller-manager-image-734000\" (UID: \"a9b71a4e1538c228c899c79340e489e0\") " pod="kube-system/kube-controller-manager-image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.926523    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9b71a4e1538c228c899c79340e489e0-flexvolume-dir\") pod \"kube-controller-manager-image-734000\" (UID: \"a9b71a4e1538c228c899c79340e489e0\") " pod="kube-system/kube-controller-manager-image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.926531    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9b71a4e1538c228c899c79340e489e0-k8s-certs\") pod \"kube-controller-manager-image-734000\" (UID: \"a9b71a4e1538c228c899c79340e489e0\") " pod="kube-system/kube-controller-manager-image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.926540    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9b71a4e1538c228c899c79340e489e0-kubeconfig\") pod \"kube-controller-manager-image-734000\" (UID: \"a9b71a4e1538c228c899c79340e489e0\") " pod="kube-system/kube-controller-manager-image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.926558    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/3f6e2038ad9e1e5bdd0479c1e906e9e3-etcd-certs\") pod \"etcd-image-734000\" (UID: \"3f6e2038ad9e1e5bdd0479c1e906e9e3\") " pod="kube-system/etcd-image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.926569    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/3f6e2038ad9e1e5bdd0479c1e906e9e3-etcd-data\") pod \"etcd-image-734000\" (UID: \"3f6e2038ad9e1e5bdd0479c1e906e9e3\") " pod="kube-system/etcd-image-734000"
	Jun 10 14:09:55 image-734000 kubelet[2407]: I0610 14:09:55.926578    2407 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7a5ef6c536c938dc58e7ea87531b0d57-usr-share-ca-certificates\") pod \"kube-apiserver-image-734000\" (UID: \"7a5ef6c536c938dc58e7ea87531b0d57\") " pod="kube-system/kube-apiserver-image-734000"
	Jun 10 14:09:56 image-734000 kubelet[2407]: I0610 14:09:56.607846    2407 apiserver.go:52] "Watching apiserver"
	Jun 10 14:09:56 image-734000 kubelet[2407]: I0610 14:09:56.624820    2407 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world"
	Jun 10 14:09:56 image-734000 kubelet[2407]: I0610 14:09:56.629940    2407 reconciler.go:41] "Reconciler: start to sync state"
	Jun 10 14:09:56 image-734000 kubelet[2407]: I0610 14:09:56.693130    2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-image-734000" podStartSLOduration=1.693106925 podCreationTimestamp="2023-06-10 14:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 14:09:56.69308655 +0000 UTC m=+1.131489877" watchObservedRunningTime="2023-06-10 14:09:56.693106925 +0000 UTC m=+1.131510252"
	Jun 10 14:09:56 image-734000 kubelet[2407]: I0610 14:09:56.697506    2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-image-734000" podStartSLOduration=1.6974903000000001 podCreationTimestamp="2023-06-10 14:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 14:09:56.696693425 +0000 UTC m=+1.135096710" watchObservedRunningTime="2023-06-10 14:09:56.6974903 +0000 UTC m=+1.135893585"
	Jun 10 14:09:56 image-734000 kubelet[2407]: I0610 14:09:56.701326    2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-image-734000" podStartSLOduration=2.701310509 podCreationTimestamp="2023-06-10 14:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 14:09:56.700155425 +0000 UTC m=+1.138558752" watchObservedRunningTime="2023-06-10 14:09:56.701310509 +0000 UTC m=+1.139713835"
	Jun 10 14:09:56 image-734000 kubelet[2407]: I0610 14:09:56.711007    2407 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/etcd-image-734000" podStartSLOduration=1.710971217 podCreationTimestamp="2023-06-10 14:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-06-10 14:09:56.704748175 +0000 UTC m=+1.143151502" watchObservedRunningTime="2023-06-10 14:09:56.710971217 +0000 UTC m=+1.149374544"
	Jun 10 14:09:58 image-734000 kubelet[2407]: I0610 14:09:58.289176    2407 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p image-734000 -n image-734000
helpers_test.go:261: (dbg) Run:  kubectl --context image-734000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: storage-provisioner
helpers_test.go:274: ======> post-mortem[TestImageBuild/serial/BuildWithBuildArg]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context image-734000 describe pod storage-provisioner
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context image-734000 describe pod storage-provisioner: exit status 1 (39.467375ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:279: kubectl --context image-734000 describe pod storage-provisioner: exit status 1
--- FAIL: TestImageBuild/serial/BuildWithBuildArg (1.05s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (56.11s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-433000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-433000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.881517833s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-433000 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-433000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6747cb1d-c02f-4bf9-98c5-dec2b1009738] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6747cb1d-c02f-4bf9-98c5-dec2b1009738] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.0121575s
addons_test.go:238: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-433000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-433000 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-433000 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.105.6
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.105.6: exit status 1 (15.041751583s)

-- stdout --
	;; connection timed out; no servers could be reached
	

-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.105.6" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

stderr: 
addons_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-433000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-433000 addons disable ingress-dns --alsologtostderr -v=1: (9.842480292s)
addons_test.go:287: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-433000 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-433000 addons disable ingress --alsologtostderr -v=1: (7.060983209s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ingress-addon-legacy-433000 -n ingress-addon-legacy-433000
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-433000 logs -n 25
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                           Args                           |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image   | functional-922000 image ls                               | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	| image   | functional-922000 image load                             | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|         | /Users/jenkins/workspace/addon-resizer-save.tar          |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-922000 image ls                               | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	| image   | functional-922000 image save --daemon                    | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|         | gcr.io/google-containers/addon-resizer:functional-922000 |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-922000                                        | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|         | image ls --format short                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-922000                                        | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT |                     |
	|         | image ls --format yaml                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-922000                                        | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|         | image ls --format json                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| ssh     | functional-922000 ssh pgrep                              | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT |                     |
	|         | buildkitd                                                |                             |         |         |                     |                     |
	| image   | functional-922000                                        | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|         | image ls --format table                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                        |                             |         |         |                     |                     |
	| image   | functional-922000 image build -t                         | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|         | localhost/my-image:functional-922000                     |                             |         |         |                     |                     |
	|         | testdata/build --alsologtostderr                         |                             |         |         |                     |                     |
	| image   | functional-922000 image ls                               | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	| delete  | -p functional-922000                                     | functional-922000           | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	| start   | -p image-734000 --driver=qemu2                           | image-734000                | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|         |                                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-734000                | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | -p image-734000                                          |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-734000                | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str                 |                             |         |         |                     |                     |
	|         | --build-opt=no-cache                                     |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p                       |                             |         |         |                     |                     |
	|         | image-734000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-734000                | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|         | ./testdata/image-build/test-normal                       |                             |         |         |                     |                     |
	|         | --build-opt=no-cache -p                                  |                             |         |         |                     |                     |
	|         | image-734000                                             |                             |         |         |                     |                     |
	| image   | build -t aaa:latest                                      | image-734000                | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	|         | -f inner/Dockerfile                                      |                             |         |         |                     |                     |
	|         | ./testdata/image-build/test-f                            |                             |         |         |                     |                     |
	|         | -p image-734000                                          |                             |         |         |                     |                     |
	| delete  | -p image-734000                                          | image-734000                | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:09 PDT |
	| start   | -p ingress-addon-legacy-433000                           | ingress-addon-legacy-433000 | jenkins | v1.30.1 | 10 Jun 23 07:09 PDT | 10 Jun 23 07:11 PDT |
	|         | --kubernetes-version=v1.18.20                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	|         | --driver=qemu2                                           |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-433000                              | ingress-addon-legacy-433000 | jenkins | v1.30.1 | 10 Jun 23 07:11 PDT | 10 Jun 23 07:11 PDT |
	|         | addons enable ingress                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-433000                              | ingress-addon-legacy-433000 | jenkins | v1.30.1 | 10 Jun 23 07:11 PDT | 10 Jun 23 07:11 PDT |
	|         | addons enable ingress-dns                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-433000                              | ingress-addon-legacy-433000 | jenkins | v1.30.1 | 10 Jun 23 07:12 PDT | 10 Jun 23 07:12 PDT |
	|         | ssh curl -s http://127.0.0.1/                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-433000 ip                           | ingress-addon-legacy-433000 | jenkins | v1.30.1 | 10 Jun 23 07:12 PDT | 10 Jun 23 07:12 PDT |
	| addons  | ingress-addon-legacy-433000                              | ingress-addon-legacy-433000 | jenkins | v1.30.1 | 10 Jun 23 07:12 PDT | 10 Jun 23 07:12 PDT |
	|         | addons disable ingress-dns                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-433000                              | ingress-addon-legacy-433000 | jenkins | v1.30.1 | 10 Jun 23 07:12 PDT | 10 Jun 23 07:12 PDT |
	|         | addons disable ingress                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                   |                             |         |         |                     |                     |
	|---------|----------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 07:09:59
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 07:09:59.626688    2121 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:09:59.626819    2121 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:09:59.626821    2121 out.go:309] Setting ErrFile to fd 2...
	I0610 07:09:59.626824    2121 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:09:59.626901    2121 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:09:59.627938    2121 out.go:303] Setting JSON to false
	I0610 07:09:59.642920    2121 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":569,"bootTime":1686405630,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:09:59.643015    2121 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:09:59.646945    2121 out.go:177] * [ingress-addon-legacy-433000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:09:59.653961    2121 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:09:59.653994    2121 notify.go:220] Checking for updates...
	I0610 07:09:59.656895    2121 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:09:59.659959    2121 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:09:59.662950    2121 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:09:59.665943    2121 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:09:59.668927    2121 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:09:59.679560    2121 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:09:59.683893    2121 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:09:59.690930    2121 start.go:297] selected driver: qemu2
	I0610 07:09:59.690935    2121 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:09:59.690941    2121 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:09:59.692921    2121 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:09:59.695855    2121 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:09:59.699011    2121 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:09:59.699042    2121 cni.go:84] Creating CNI manager for ""
	I0610 07:09:59.699050    2121 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 07:09:59.699054    2121 start_flags.go:319] config:
	{Name:ingress-addon-legacy-433000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-433000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:09:59.699154    2121 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:09:59.704887    2121 out.go:177] * Starting control plane node ingress-addon-legacy-433000 in cluster ingress-addon-legacy-433000
	I0610 07:09:59.708948    2121 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0610 07:09:59.907740    2121 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0610 07:09:59.907875    2121 cache.go:57] Caching tarball of preloaded images
	I0610 07:09:59.908702    2121 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0610 07:09:59.912789    2121 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0610 07:09:59.916666    2121 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:10:00.136278    2121 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4?checksum=md5:c8c260b886393123ce9d312d8ac2379e -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4
	I0610 07:10:11.973730    2121 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:10:11.973855    2121 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:10:12.722591    2121 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0610 07:10:12.722777    2121 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/config.json ...
	I0610 07:10:12.722793    2121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/config.json: {Name:mkd9cbc2da3b4255a89324ca74bf77a21ff8ed3b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:10:12.723022    2121 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:10:12.723031    2121 start.go:364] acquiring machines lock for ingress-addon-legacy-433000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:10:12.723060    2121 start.go:368] acquired machines lock for "ingress-addon-legacy-433000" in 22µs
	I0610 07:10:12.723070    2121 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-433000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-433000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:10:12.723106    2121 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:10:12.732098    2121 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0610 07:10:12.746596    2121 start.go:159] libmachine.API.Create for "ingress-addon-legacy-433000" (driver="qemu2")
	I0610 07:10:12.746618    2121 client.go:168] LocalClient.Create starting
	I0610 07:10:12.746693    2121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:10:12.746715    2121 main.go:141] libmachine: Decoding PEM data...
	I0610 07:10:12.746723    2121 main.go:141] libmachine: Parsing certificate...
	I0610 07:10:12.746766    2121 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:10:12.746780    2121 main.go:141] libmachine: Decoding PEM data...
	I0610 07:10:12.746789    2121 main.go:141] libmachine: Parsing certificate...
	I0610 07:10:12.747131    2121 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:10:12.880732    2121 main.go:141] libmachine: Creating SSH key...
	I0610 07:10:13.173297    2121 main.go:141] libmachine: Creating Disk image...
	I0610 07:10:13.173307    2121 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:10:13.173502    2121 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/disk.qcow2
	I0610 07:10:13.182931    2121 main.go:141] libmachine: STDOUT: 
	I0610 07:10:13.182945    2121 main.go:141] libmachine: STDERR: 
	I0610 07:10:13.183006    2121 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/disk.qcow2 +20000M
	I0610 07:10:13.190427    2121 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:10:13.190444    2121 main.go:141] libmachine: STDERR: 
	I0610 07:10:13.190465    2121 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/disk.qcow2
	I0610 07:10:13.190471    2121 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:10:13.190509    2121 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4096 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:58:12:01:f8:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/disk.qcow2
	I0610 07:10:13.224459    2121 main.go:141] libmachine: STDOUT: 
	I0610 07:10:13.224502    2121 main.go:141] libmachine: STDERR: 
	I0610 07:10:13.224507    2121 main.go:141] libmachine: Attempt 0
	I0610 07:10:13.224521    2121 main.go:141] libmachine: Searching for ba:58:12:1:f8:72 in /var/db/dhcpd_leases ...
	I0610 07:10:13.224594    2121 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 07:10:13.224615    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:66:c1:50:46:e2:b4 ID:1,66:c1:50:46:e2:b4 Lease:0x6485d5a2}
	I0610 07:10:13.224621    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:10:13.224627    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:10:13.224631    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:10:15.226697    2121 main.go:141] libmachine: Attempt 1
	I0610 07:10:15.226789    2121 main.go:141] libmachine: Searching for ba:58:12:1:f8:72 in /var/db/dhcpd_leases ...
	I0610 07:10:15.227174    2121 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 07:10:15.227227    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:66:c1:50:46:e2:b4 ID:1,66:c1:50:46:e2:b4 Lease:0x6485d5a2}
	I0610 07:10:15.227296    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:10:15.227330    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:10:15.227362    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:10:17.229457    2121 main.go:141] libmachine: Attempt 2
	I0610 07:10:17.229493    2121 main.go:141] libmachine: Searching for ba:58:12:1:f8:72 in /var/db/dhcpd_leases ...
	I0610 07:10:17.229632    2121 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 07:10:17.229647    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:66:c1:50:46:e2:b4 ID:1,66:c1:50:46:e2:b4 Lease:0x6485d5a2}
	I0610 07:10:17.229653    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:10:17.229658    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:10:17.229663    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:10:19.231650    2121 main.go:141] libmachine: Attempt 3
	I0610 07:10:19.231669    2121 main.go:141] libmachine: Searching for ba:58:12:1:f8:72 in /var/db/dhcpd_leases ...
	I0610 07:10:19.231756    2121 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 07:10:19.231764    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:66:c1:50:46:e2:b4 ID:1,66:c1:50:46:e2:b4 Lease:0x6485d5a2}
	I0610 07:10:19.231784    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:10:19.231800    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:10:19.231806    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:10:21.233821    2121 main.go:141] libmachine: Attempt 4
	I0610 07:10:21.233846    2121 main.go:141] libmachine: Searching for ba:58:12:1:f8:72 in /var/db/dhcpd_leases ...
	I0610 07:10:21.233899    2121 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 07:10:21.233911    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:66:c1:50:46:e2:b4 ID:1,66:c1:50:46:e2:b4 Lease:0x6485d5a2}
	I0610 07:10:21.233916    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:10:21.233938    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:10:21.233942    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:10:23.234425    2121 main.go:141] libmachine: Attempt 5
	I0610 07:10:23.234445    2121 main.go:141] libmachine: Searching for ba:58:12:1:f8:72 in /var/db/dhcpd_leases ...
	I0610 07:10:23.234525    2121 main.go:141] libmachine: Found 4 entries in /var/db/dhcpd_leases!
	I0610 07:10:23.234534    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.5 HWAddress:66:c1:50:46:e2:b4 ID:1,66:c1:50:46:e2:b4 Lease:0x6485d5a2}
	I0610 07:10:23.234540    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.4 HWAddress:d6:1a:3c:8a:49:10 ID:1,d6:1a:3c:8a:49:10 Lease:0x6485d4cd}
	I0610 07:10:23.234545    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.3 HWAddress:de:12:f2:ec:9b:79 ID:1,de:12:f2:ec:9b:79 Lease:0x64848340}
	I0610 07:10:23.234551    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:e2:48:cb:84:48:b2 ID:1,e2:48:cb:84:48:b2 Lease:0x6485d459}
	I0610 07:10:25.236578    2121 main.go:141] libmachine: Attempt 6
	I0610 07:10:25.236628    2121 main.go:141] libmachine: Searching for ba:58:12:1:f8:72 in /var/db/dhcpd_leases ...
	I0610 07:10:25.236765    2121 main.go:141] libmachine: Found 5 entries in /var/db/dhcpd_leases!
	I0610 07:10:25.236778    2121 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.6 HWAddress:ba:58:12:1:f8:72 ID:1,ba:58:12:1:f8:72 Lease:0x6485d5d0}
	I0610 07:10:25.236785    2121 main.go:141] libmachine: Found match: ba:58:12:1:f8:72
	I0610 07:10:25.236795    2121 main.go:141] libmachine: IP: 192.168.105.6
	I0610 07:10:25.236825    2121 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.6)...
	I0610 07:10:27.258099    2121 machine.go:88] provisioning docker machine ...
	I0610 07:10:27.258162    2121 buildroot.go:166] provisioning hostname "ingress-addon-legacy-433000"
	I0610 07:10:27.258400    2121 main.go:141] libmachine: Using SSH client type: native
	I0610 07:10:27.259200    2121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d286d0] 0x100d2b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 07:10:27.259288    2121 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-433000 && echo "ingress-addon-legacy-433000" | sudo tee /etc/hostname
	I0610 07:10:27.353584    2121 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-433000
	
	I0610 07:10:27.353710    2121 main.go:141] libmachine: Using SSH client type: native
	I0610 07:10:27.354188    2121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d286d0] 0x100d2b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 07:10:27.354211    2121 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-433000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-433000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-433000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0610 07:10:27.429045    2121 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0610 07:10:27.429066    2121 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/15074-894/.minikube CaCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15074-894/.minikube}
	I0610 07:10:27.429077    2121 buildroot.go:174] setting up certificates
	I0610 07:10:27.429085    2121 provision.go:83] configureAuth start
	I0610 07:10:27.429093    2121 provision.go:138] copyHostCerts
	I0610 07:10:27.429147    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15074-894/.minikube/key.pem
	I0610 07:10:27.429242    2121 exec_runner.go:144] found /Users/jenkins/minikube-integration/15074-894/.minikube/key.pem, removing ...
	I0610 07:10:27.429250    2121 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15074-894/.minikube/key.pem
	I0610 07:10:27.429442    2121 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/key.pem (1679 bytes)
	I0610 07:10:27.429715    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15074-894/.minikube/ca.pem
	I0610 07:10:27.429751    2121 exec_runner.go:144] found /Users/jenkins/minikube-integration/15074-894/.minikube/ca.pem, removing ...
	I0610 07:10:27.429754    2121 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15074-894/.minikube/ca.pem
	I0610 07:10:27.429840    2121 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/ca.pem (1078 bytes)
	I0610 07:10:27.429965    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15074-894/.minikube/cert.pem
	I0610 07:10:27.429998    2121 exec_runner.go:144] found /Users/jenkins/minikube-integration/15074-894/.minikube/cert.pem, removing ...
	I0610 07:10:27.430002    2121 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/15074-894/.minikube/cert.pem
	I0610 07:10:27.430107    2121 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15074-894/.minikube/cert.pem (1123 bytes)
	I0610 07:10:27.430235    2121 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-433000 san=[192.168.105.6 192.168.105.6 localhost 127.0.0.1 minikube ingress-addon-legacy-433000]
	I0610 07:10:27.573308    2121 provision.go:172] copyRemoteCerts
	I0610 07:10:27.573369    2121 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0610 07:10:27.573379    2121 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/id_rsa Username:docker}
	I0610 07:10:27.606731    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0610 07:10:27.606787    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0610 07:10:27.613833    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0610 07:10:27.613873    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0610 07:10:27.620510    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0610 07:10:27.620547    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0610 07:10:27.627699    2121 provision.go:86] duration metric: configureAuth took 198.613125ms
	I0610 07:10:27.627710    2121 buildroot.go:189] setting minikube options for container-runtime
	I0610 07:10:27.627805    2121 config.go:182] Loaded profile config "ingress-addon-legacy-433000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0610 07:10:27.627840    2121 main.go:141] libmachine: Using SSH client type: native
	I0610 07:10:27.628066    2121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d286d0] 0x100d2b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 07:10:27.628071    2121 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0610 07:10:27.693649    2121 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0610 07:10:27.693656    2121 buildroot.go:70] root file system type: tmpfs
	I0610 07:10:27.693709    2121 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0610 07:10:27.693751    2121 main.go:141] libmachine: Using SSH client type: native
	I0610 07:10:27.694009    2121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d286d0] 0x100d2b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 07:10:27.694047    2121 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0610 07:10:27.762035    2121 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0610 07:10:27.762084    2121 main.go:141] libmachine: Using SSH client type: native
	I0610 07:10:27.762370    2121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d286d0] 0x100d2b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 07:10:27.762387    2121 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0610 07:10:28.125941    2121 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0610 07:10:28.125957    2121 machine.go:91] provisioned docker machine in 867.86025ms
	I0610 07:10:28.125963    2121 client.go:171] LocalClient.Create took 15.379861833s
	I0610 07:10:28.125980    2121 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-433000" took 15.379906666s
	I0610 07:10:28.125985    2121 start.go:300] post-start starting for "ingress-addon-legacy-433000" (driver="qemu2")
	I0610 07:10:28.125988    2121 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0610 07:10:28.126065    2121 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0610 07:10:28.126074    2121 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/id_rsa Username:docker}
	I0610 07:10:28.166202    2121 ssh_runner.go:195] Run: cat /etc/os-release
	I0610 07:10:28.167615    2121 info.go:137] Remote host: Buildroot 2021.02.12
	I0610 07:10:28.167626    2121 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15074-894/.minikube/addons for local assets ...
	I0610 07:10:28.167694    2121 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15074-894/.minikube/files for local assets ...
	I0610 07:10:28.167809    2121 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem -> 13362.pem in /etc/ssl/certs
	I0610 07:10:28.167813    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem -> /etc/ssl/certs/13362.pem
	I0610 07:10:28.167922    2121 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0610 07:10:28.171045    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem --> /etc/ssl/certs/13362.pem (1708 bytes)
	I0610 07:10:28.177757    2121 start.go:303] post-start completed in 51.768958ms
	I0610 07:10:28.178176    2121 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/config.json ...
	I0610 07:10:28.178333    2121 start.go:128] duration metric: createHost completed in 15.455746792s
	I0610 07:10:28.178358    2121 main.go:141] libmachine: Using SSH client type: native
	I0610 07:10:28.178580    2121 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100d286d0] 0x100d2b130 <nil>  [] 0s} 192.168.105.6 22 <nil> <nil>}
	I0610 07:10:28.178584    2121 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0610 07:10:28.240550    2121 main.go:141] libmachine: SSH cmd err, output: <nil>: 1686406228.653540502
	
	I0610 07:10:28.240562    2121 fix.go:207] guest clock: 1686406228.653540502
	I0610 07:10:28.240566    2121 fix.go:220] Guest: 2023-06-10 07:10:28.653540502 -0700 PDT Remote: 2023-06-10 07:10:28.178336 -0700 PDT m=+28.571677584 (delta=475.204502ms)
	I0610 07:10:28.240579    2121 fix.go:191] guest clock delta is within tolerance: 475.204502ms
	I0610 07:10:28.240582    2121 start.go:83] releasing machines lock for "ingress-addon-legacy-433000", held for 15.518042542s
	I0610 07:10:28.240925    2121 ssh_runner.go:195] Run: cat /version.json
	I0610 07:10:28.240934    2121 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0610 07:10:28.240937    2121 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/id_rsa Username:docker}
	I0610 07:10:28.240950    2121 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/id_rsa Username:docker}
	I0610 07:10:28.274154    2121 ssh_runner.go:195] Run: systemctl --version
	I0610 07:10:28.316432    2121 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0610 07:10:28.318592    2121 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0610 07:10:28.318628    2121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0610 07:10:28.322005    2121 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0610 07:10:28.327027    2121 cni.go:307] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0610 07:10:28.327034    2121 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0610 07:10:28.327105    2121 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 07:10:28.341954    2121 docker.go:633] Got preloaded images: 
	I0610 07:10:28.341968    2121 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0610 07:10:28.342035    2121 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 07:10:28.344812    2121 ssh_runner.go:195] Run: which lz4
	I0610 07:10:28.346150    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0610 07:10:28.346233    2121 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 07:10:28.347420    2121 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 07:10:28.347432    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0610 07:10:30.051748    2121 docker.go:597] Took 1.705614 seconds to copy over tarball
	I0610 07:10:30.051807    2121 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 07:10:31.397993    2121 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.346210667s)
	I0610 07:10:31.398007    2121 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 07:10:31.423519    2121 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 07:10:31.428300    2121 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0610 07:10:31.435252    2121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:10:31.506821    2121 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 07:10:32.985486    2121 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.47869775s)
	I0610 07:10:32.985509    2121 start.go:481] detecting cgroup driver to use...
	I0610 07:10:32.985588    2121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 07:10:32.991042    2121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0610 07:10:32.994068    2121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0610 07:10:32.997119    2121 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0610 07:10:32.997151    2121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0610 07:10:33.000139    2121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 07:10:33.003736    2121 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0610 07:10:33.007456    2121 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0610 07:10:33.010884    2121 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0610 07:10:33.014430    2121 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0610 07:10:33.017429    2121 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0610 07:10:33.020120    2121 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0610 07:10:33.023396    2121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:10:33.107918    2121 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0610 07:10:33.114222    2121 start.go:481] detecting cgroup driver to use...
	I0610 07:10:33.114288    2121 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0610 07:10:33.120303    2121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 07:10:33.125387    2121 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0610 07:10:33.131410    2121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0610 07:10:33.136503    2121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 07:10:33.141200    2121 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0610 07:10:33.183382    2121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0610 07:10:33.188805    2121 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0610 07:10:33.194257    2121 ssh_runner.go:195] Run: which cri-dockerd
	I0610 07:10:33.195590    2121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0610 07:10:33.198339    2121 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0610 07:10:33.203256    2121 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0610 07:10:33.287477    2121 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0610 07:10:33.367734    2121 docker.go:532] configuring docker to use "cgroupfs" as cgroup driver...
	I0610 07:10:33.367747    2121 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0610 07:10:33.373352    2121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:10:33.455462    2121 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 07:10:34.623299    2121 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.167858834s)
	I0610 07:10:34.623367    2121 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 07:10:34.631084    2121 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0610 07:10:34.642466    2121 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 24.0.2 ...
	I0610 07:10:34.642618    2121 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0610 07:10:34.644035    2121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 07:10:34.647847    2121 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0610 07:10:34.647887    2121 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 07:10:34.653596    2121 docker.go:633] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0610 07:10:34.653602    2121 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0610 07:10:34.653651    2121 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 07:10:34.656409    2121 ssh_runner.go:195] Run: which lz4
	I0610 07:10:34.657597    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 -> /preloaded.tar.lz4
	I0610 07:10:34.657687    2121 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0610 07:10:34.658782    2121 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0610 07:10:34.658793    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (459739018 bytes)
	I0610 07:10:36.374486    2121 docker.go:597] Took 1.716905 seconds to copy over tarball
	I0610 07:10:36.374554    2121 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0610 07:10:37.870674    2121 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (1.496150125s)
	I0610 07:10:37.870690    2121 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0610 07:10:37.895806    2121 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0610 07:10:37.899545    2121 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2502 bytes)
	I0610 07:10:37.905174    2121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0610 07:10:37.980908    2121 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0610 07:10:39.610446    2121 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.629574625s)
	I0610 07:10:39.610537    2121 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0610 07:10:39.622076    2121 docker.go:633] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0610 07:10:39.622083    2121 docker.go:639] registry.k8s.io/kube-apiserver:v1.18.20 wasn't preloaded
	I0610 07:10:39.622093    2121 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0610 07:10:39.664874    2121 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0610 07:10:39.664962    2121 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 07:10:39.665012    2121 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 07:10:39.665188    2121 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0610 07:10:39.665894    2121 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 07:10:39.665937    2121 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 07:10:39.665957    2121 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0610 07:10:39.666285    2121 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0610 07:10:39.675248    2121 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0610 07:10:39.675259    2121 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 07:10:39.675286    2121 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 07:10:39.675304    2121 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0610 07:10:39.675308    2121 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0610 07:10:39.675325    2121 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 07:10:39.675308    2121 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 07:10:39.675350    2121 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	W0610 07:10:40.783665    2121 image.go:265] image registry.k8s.io/etcd:3.4.3-0 arch mismatch: want arm64 got amd64. fixing
	I0610 07:10:40.783793    2121 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0610 07:10:40.789791    2121 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "29dd247b2572efbe28fcaea3fef1c5d72593da59f7350e3f6d2e6618983f9c03" in container runtime
	I0610 07:10:40.789814    2121 docker.go:313] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0610 07:10:40.789854    2121 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I0610 07:10:40.805586    2121 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
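
This inspect/remove/queue-from-cache pattern repeats for every image below: the copy in the daemon is an amd64 build left over from the preload, so minikube deletes it and schedules the arm64 replacement from the host-side cache. One way to see the mismatch by hand (a sketch; minikube itself compares image IDs against its cache):

    # An amd64 answer on an arm64 node means the image must be replaced.
    docker image inspect --format '{{.Architecture}}' registry.k8s.io/etcd:3.4.3-0
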
	W0610 07:10:40.942841    2121 image.go:265] image registry.k8s.io/kube-proxy:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 07:10:40.942987    2121 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0610 07:10:40.955910    2121 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "b11cdc97ac6ac4ef2b3b0662edbe16597084b17cbc8e3d61fcaf4ef827a7ed18" in container runtime
	I0610 07:10:40.955928    2121 docker.go:313] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0610 07:10:40.955962    2121 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.18.20
	I0610 07:10:40.962416    2121 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.18.20
	I0610 07:10:41.181743    2121 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0610 07:10:41.188017    2121 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "2a060e2e7101d419352bf82c613158587400be743482d9a537ec4a9d1b4eb93c" in container runtime
	I0610 07:10:41.188049    2121 docker.go:313] Removing image: registry.k8s.io/pause:3.2
	I0610 07:10:41.188100    2121 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.2
	I0610 07:10:41.194158    2121 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.2
	W0610 07:10:41.260535    2121 image.go:265] image registry.k8s.io/coredns:1.6.7 arch mismatch: want arm64 got amd64. fixing
	I0610 07:10:41.260645    2121 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0610 07:10:41.266578    2121 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "ff3af22d8878afc6985d3fec3e066d00ef431aa166c3a01ac58f1990adc92a2c" in container runtime
	I0610 07:10:41.266599    2121 docker.go:313] Removing image: registry.k8s.io/coredns:1.6.7
	I0610 07:10:41.266641    2121 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.7
	W0610 07:10:41.267657    2121 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 07:10:41.267733    2121 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 07:10:41.277570    2121 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0610 07:10:41.277599    2121 docker.go:313] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 07:10:41.277573    2121 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns_1.6.7
	I0610 07:10:41.277675    2121 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 07:10:41.289425    2121 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	W0610 07:10:41.404594    2121 image.go:265] image registry.k8s.io/kube-controller-manager:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 07:10:41.404708    2121 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 07:10:41.410467    2121 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "297c79afbdb81ceb4cf857e0c54a0de7b6ce7ebe01e6cab68fc8baf342be3ea7" in container runtime
	I0610 07:10:41.410486    2121 docker.go:313] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 07:10:41.410525    2121 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0610 07:10:41.416023    2121 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.18.20
	W0610 07:10:41.688860    2121 image.go:265] image registry.k8s.io/kube-apiserver:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 07:10:41.689370    2121 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0610 07:10:41.713375    2121 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "d353007847ec85700463981309a5846c8d9c93fbcd1323104266212926d68257" in container runtime
	I0610 07:10:41.713462    2121 docker.go:313] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0610 07:10:41.713593    2121 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0610 07:10:41.729331    2121 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.18.20
	W0610 07:10:41.891275    2121 image.go:265] image registry.k8s.io/kube-scheduler:v1.18.20 arch mismatch: want arm64 got amd64. fixing
	I0610 07:10:41.891757    2121 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0610 07:10:41.913570    2121 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "177548d745cb87f773d02f41d453af2f2a1479dbe3c32e749cf6d8145c005e79" in container runtime
	I0610 07:10:41.913652    2121 docker.go:313] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0610 07:10:41.913888    2121 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0610 07:10:41.929229    2121 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.18.20
	I0610 07:10:41.929286    2121 cache_images.go:92] LoadImages completed in 2.307263042s
	W0610 07:10:41.929384    2121 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
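
The reload then fails because the expected cache files were never written on the host, so the removed images end up being pulled again by kubeadm during preflight instead. Verifying the cache from the host side would look roughly like this (path taken from this run):

    # Each image minikube tried to load should exist as a file under the cache dir.
    ls -l /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.4.3-0
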
	I0610 07:10:41.929490    2121 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0610 07:10:41.945928    2121 cni.go:84] Creating CNI manager for ""
	I0610 07:10:41.945947    2121 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 07:10:41.945967    2121 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0610 07:10:41.945987    2121 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.6 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-433000 NodeName:ingress-addon-legacy-433000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0610 07:10:41.946154    2121 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.6
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-433000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.6
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.6"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0610 07:10:41.946226    2121 kubeadm.go:971] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-433000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-433000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0610 07:10:41.946365    2121 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0610 07:10:41.951973    2121 binaries.go:44] Found k8s binaries, skipping transfer
	I0610 07:10:41.952016    2121 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0610 07:10:41.956143    2121 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (355 bytes)
	I0610 07:10:41.963488    2121 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0610 07:10:41.969971    2121 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2127 bytes)
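
The rendered kubeadm config is staged as kubeadm.yaml.new here and promoted to kubeadm.yaml just before init. It can be exercised without mutating the node via kubeadm's dry-run mode (a sketch, using the v1.18.20 binaries already on the node):

    # Validate the generated configuration end-to-end without starting anything.
    sudo /var/lib/minikube/binaries/v1.18.20/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run
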
	I0610 07:10:41.976342    2121 ssh_runner.go:195] Run: grep 192.168.105.6	control-plane.minikube.internal$ /etc/hosts
	I0610 07:10:41.977646    2121 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.6	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0610 07:10:41.981420    2121 certs.go:56] Setting up /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000 for IP: 192.168.105.6
	I0610 07:10:41.981434    2121 certs.go:190] acquiring lock for shared ca certs: {Name:mk2bb46910d2e2fc8cdcab49d7502062bd19dc79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:10:41.981779    2121 certs.go:199] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15074-894/.minikube/ca.key
	I0610 07:10:41.981919    2121 certs.go:199] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15074-894/.minikube/proxy-client-ca.key
	I0610 07:10:41.981946    2121 certs.go:319] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.key
	I0610 07:10:41.981952    2121 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt with IP's: []
	I0610 07:10:42.128093    2121 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt ...
	I0610 07:10:42.128102    2121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: {Name:mk0d6833b6c1c3ee3f7900aaa31750a88c9a10b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:10:42.128347    2121 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.key ...
	I0610 07:10:42.128351    2121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.key: {Name:mk039b8b87b9d6fa0e51b833dcb801ebf5df9397 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:10:42.128473    2121 certs.go:319] generating minikube signed cert: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.key.b354f644
	I0610 07:10:42.128484    2121 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.crt.b354f644 with IP's: [192.168.105.6 10.96.0.1 127.0.0.1 10.0.0.1]
	I0610 07:10:42.220325    2121 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.crt.b354f644 ...
	I0610 07:10:42.220329    2121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.crt.b354f644: {Name:mkc56573016887bbf4348cfbce12bbc50444a6bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:10:42.220516    2121 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.key.b354f644 ...
	I0610 07:10:42.220519    2121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.key.b354f644: {Name:mke791a822a30a6e9a5efcf2d1c9bede74a35b37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:10:42.220644    2121 certs.go:337] copying /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.crt.b354f644 -> /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.crt
	I0610 07:10:42.220789    2121 certs.go:341] copying /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.key.b354f644 -> /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.key
	I0610 07:10:42.220890    2121 certs.go:319] generating aggregator signed cert: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/proxy-client.key
	I0610 07:10:42.220897    2121 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/proxy-client.crt with IP's: []
	I0610 07:10:42.293224    2121 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/proxy-client.crt ...
	I0610 07:10:42.293232    2121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/proxy-client.crt: {Name:mkdab7ea7ce17552ab43dc5462383bcfeb8145ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:10:42.293400    2121 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/proxy-client.key ...
	I0610 07:10:42.293406    2121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/proxy-client.key: {Name:mk8f44a2f2e645182b7958dad5a2325f1f4d9376 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:10:42.293515    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0610 07:10:42.293534    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0610 07:10:42.293552    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0610 07:10:42.293569    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0610 07:10:42.293581    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0610 07:10:42.293601    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0610 07:10:42.293611    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0610 07:10:42.293627    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0610 07:10:42.293722    2121 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/1336.pem (1338 bytes)
	W0610 07:10:42.294026    2121 certs.go:433] ignoring /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/1336_empty.pem, impossibly tiny 0 bytes
	I0610 07:10:42.294041    2121 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca-key.pem (1675 bytes)
	I0610 07:10:42.294075    2121 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem (1078 bytes)
	I0610 07:10:42.294109    2121 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem (1123 bytes)
	I0610 07:10:42.294138    2121 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/Users/jenkins/minikube-integration/15074-894/.minikube/certs/key.pem (1679 bytes)
	I0610 07:10:42.294219    2121 certs.go:437] found cert: /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem (1708 bytes)
	I0610 07:10:42.294247    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem -> /usr/share/ca-certificates/13362.pem
	I0610 07:10:42.294264    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0610 07:10:42.294275    2121 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15074-894/.minikube/certs/1336.pem -> /usr/share/ca-certificates/1336.pem
	I0610 07:10:42.294669    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0610 07:10:42.303429    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0610 07:10:42.310657    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0610 07:10:42.317512    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0610 07:10:42.324062    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0610 07:10:42.331185    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0610 07:10:42.338390    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0610 07:10:42.345362    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0610 07:10:42.352165    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/ssl/certs/13362.pem --> /usr/share/ca-certificates/13362.pem (1708 bytes)
	I0610 07:10:42.359131    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0610 07:10:42.366378    2121 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15074-894/.minikube/certs/1336.pem --> /usr/share/ca-certificates/1336.pem (1338 bytes)
	I0610 07:10:42.373371    2121 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0610 07:10:42.378045    2121 ssh_runner.go:195] Run: openssl version
	I0610 07:10:42.380044    2121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/13362.pem && ln -fs /usr/share/ca-certificates/13362.pem /etc/ssl/certs/13362.pem"
	I0610 07:10:42.383542    2121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/13362.pem
	I0610 07:10:42.385103    2121 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jun 10 14:05 /usr/share/ca-certificates/13362.pem
	I0610 07:10:42.385125    2121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/13362.pem
	I0610 07:10:42.386862    2121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/13362.pem /etc/ssl/certs/3ec20f2e.0"
	I0610 07:10:42.390048    2121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0610 07:10:42.392976    2121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0610 07:10:42.394370    2121 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jun 10 14:05 /usr/share/ca-certificates/minikubeCA.pem
	I0610 07:10:42.394391    2121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0610 07:10:42.396254    2121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0610 07:10:42.399424    2121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1336.pem && ln -fs /usr/share/ca-certificates/1336.pem /etc/ssl/certs/1336.pem"
	I0610 07:10:42.402695    2121 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1336.pem
	I0610 07:10:42.404142    2121 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jun 10 14:05 /usr/share/ca-certificates/1336.pem
	I0610 07:10:42.404164    2121 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1336.pem
	I0610 07:10:42.405964    2121 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1336.pem /etc/ssl/certs/51391683.0"
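
The alternating test/ls/openssl/ln runs above rebuild OpenSSL-style hash links so the injected PEMs are trusted system-wide: each certificate gets a /etc/ssl/certs/<subject-hash>.0 symlink. Done by hand for one certificate (a slightly simplified sketch):

    # Derive the subject-hash name OpenSSL looks up, then create the link.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
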
	I0610 07:10:42.408704    2121 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0610 07:10:42.409895    2121 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0610 07:10:42.409921    2121 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-433000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-433000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:10:42.409984    2121 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0610 07:10:42.415406    2121 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0610 07:10:42.418745    2121 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0610 07:10:42.421500    2121 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0610 07:10:42.424048    2121 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0610 07:10:42.424070    2121 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0610 07:10:42.449249    2121 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0610 07:10:42.449284    2121 kubeadm.go:322] [preflight] Running pre-flight checks
	I0610 07:10:42.529019    2121 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0610 07:10:42.529072    2121 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0610 07:10:42.529126    2121 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0610 07:10:42.578599    2121 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0610 07:10:42.579565    2121 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0610 07:10:42.579587    2121 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0610 07:10:42.660958    2121 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0610 07:10:42.670094    2121 out.go:204]   - Generating certificates and keys ...
	I0610 07:10:42.670127    2121 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0610 07:10:42.670159    2121 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0610 07:10:42.747785    2121 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0610 07:10:42.853995    2121 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0610 07:10:42.962883    2121 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0610 07:10:43.045922    2121 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0610 07:10:43.086464    2121 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0610 07:10:43.086602    2121 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-433000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0610 07:10:43.127363    2121 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0610 07:10:43.127426    2121 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-433000 localhost] and IPs [192.168.105.6 127.0.0.1 ::1]
	I0610 07:10:43.178506    2121 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0610 07:10:43.271972    2121 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0610 07:10:43.373828    2121 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0610 07:10:43.374038    2121 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0610 07:10:43.486026    2121 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0610 07:10:43.640932    2121 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0610 07:10:43.721976    2121 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0610 07:10:43.830367    2121 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0610 07:10:43.830580    2121 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0610 07:10:43.834564    2121 out.go:204]   - Booting up control plane ...
	I0610 07:10:43.834620    2121 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0610 07:10:43.834666    2121 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0610 07:10:43.836730    2121 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0610 07:10:43.836779    2121 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0610 07:10:43.836858    2121 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0610 07:10:55.341013    2121 kubeadm.go:322] [apiclient] All control plane components are healthy after 11.504505 seconds
	I0610 07:10:55.341339    2121 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0610 07:10:55.370037    2121 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0610 07:10:55.884651    2121 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0610 07:10:55.884911    2121 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-433000 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0610 07:10:56.392364    2121 kubeadm.go:322] [bootstrap-token] Using token: o8s94c.vtet3poret11vbei
	I0610 07:10:56.398976    2121 out.go:204]   - Configuring RBAC rules ...
	I0610 07:10:56.399099    2121 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0610 07:10:56.399195    2121 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0610 07:10:56.403547    2121 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0610 07:10:56.405038    2121 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0610 07:10:56.406245    2121 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0610 07:10:56.407678    2121 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0610 07:10:56.412257    2121 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0610 07:10:56.601848    2121 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0610 07:10:56.806252    2121 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0610 07:10:56.806839    2121 kubeadm.go:322] 
	I0610 07:10:56.806882    2121 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0610 07:10:56.806888    2121 kubeadm.go:322] 
	I0610 07:10:56.806939    2121 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0610 07:10:56.806947    2121 kubeadm.go:322] 
	I0610 07:10:56.806965    2121 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0610 07:10:56.807006    2121 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0610 07:10:56.807047    2121 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0610 07:10:56.807057    2121 kubeadm.go:322] 
	I0610 07:10:56.807091    2121 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0610 07:10:56.807147    2121 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0610 07:10:56.807200    2121 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0610 07:10:56.807205    2121 kubeadm.go:322] 
	I0610 07:10:56.807284    2121 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0610 07:10:56.807335    2121 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0610 07:10:56.807340    2121 kubeadm.go:322] 
	I0610 07:10:56.807418    2121 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token o8s94c.vtet3poret11vbei \
	I0610 07:10:56.807491    2121 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f81669ad7d2f234b34c57c88f17d06eff870ea4064b7e3e4d3b3eb3883ffeaf2 \
	I0610 07:10:56.807508    2121 kubeadm.go:322]     --control-plane 
	I0610 07:10:56.807514    2121 kubeadm.go:322] 
	I0610 07:10:56.807601    2121 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0610 07:10:56.807608    2121 kubeadm.go:322] 
	I0610 07:10:56.807672    2121 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token o8s94c.vtet3poret11vbei \
	I0610 07:10:56.807757    2121 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:f81669ad7d2f234b34c57c88f17d06eff870ea4064b7e3e4d3b3eb3883ffeaf2 
	I0610 07:10:56.807996    2121 kubeadm.go:322] W0610 14:10:42.862700    1609 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0610 07:10:56.808128    2121 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0610 07:10:56.808241    2121 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 24.0.2. Latest validated version: 19.03
	I0610 07:10:56.808322    2121 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0610 07:10:56.808414    2121 kubeadm.go:322] W0610 14:10:44.247937    1609 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0610 07:10:56.808514    2121 kubeadm.go:322] W0610 14:10:44.248348    1609 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0610 07:10:56.808522    2121 cni.go:84] Creating CNI manager for ""
	I0610 07:10:56.808531    2121 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 07:10:56.808542    2121 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0610 07:10:56.808634    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:10:56.808636    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.30.1 minikube.k8s.io/commit=3273891fc7fc0f39c65075197baa2d52fc489f6f minikube.k8s.io/name=ingress-addon-legacy-433000 minikube.k8s.io/updated_at=2023_06_10T07_10_56_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:10:56.812956    2121 ops.go:34] apiserver oom_adj: -16
	I0610 07:10:56.942184    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:10:57.478454    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:10:57.978502    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:10:58.478233    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:10:58.978441    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:10:59.478354    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:10:59.978356    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:00.478252    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:00.978189    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:01.478372    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:01.978017    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:02.478360    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:02.978245    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:03.478051    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:03.978313    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:04.478217    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:04.978133    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:05.478181    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:05.978156    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:06.478148    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:06.978109    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:07.478149    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:07.978109    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:08.478141    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:08.978080    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:09.478173    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:09.978034    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:10.478069    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:10.978058    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:11.478022    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:11.977995    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:12.477944    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:12.977718    2121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0610 07:11:13.019031    2121 kubeadm.go:1076] duration metric: took 16.211016792s to wait for elevateKubeSystemPrivileges.
	I0610 07:11:13.019045    2121 kubeadm.go:406] StartCluster complete in 30.610159625s
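
The thirty-odd identical "get sa default" runs above are minikube polling twice a second until the default ServiceAccount exists, the readiness signal for elevateKubeSystemPrivileges; on this run that took about 16 s. The loop reduces to roughly:

    # Poll until the controller-manager has created the default ServiceAccount.
    until sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
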
	I0610 07:11:13.019054    2121 settings.go:142] acquiring lock: {Name:mk4cd069708b06d9de03f9b5393c32ff96cdd016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:11:13.019138    2121 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:11:13.019690    2121 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/kubeconfig: {Name:mkac2e0f9c3956b550c91557119bdbcf28863bf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:11:13.019858    2121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0610 07:11:13.019903    2121 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0610 07:11:13.019946    2121 addons.go:66] Setting storage-provisioner=true in profile "ingress-addon-legacy-433000"
	I0610 07:11:13.019955    2121 addons.go:228] Setting addon storage-provisioner=true in "ingress-addon-legacy-433000"
	I0610 07:11:13.019975    2121 addons.go:66] Setting default-storageclass=true in profile "ingress-addon-legacy-433000"
	I0610 07:11:13.019983    2121 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-433000"
	I0610 07:11:13.019984    2121 host.go:66] Checking if "ingress-addon-legacy-433000" exists ...
	I0610 07:11:13.020124    2121 config.go:182] Loaded profile config "ingress-addon-legacy-433000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0610 07:11:13.020104    2121 kapi.go:59] client config for ingress-addon-legacy-433000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.key", CAFile:"/Users/jenkins/minikube-integration/15074-894/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101d7f510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 07:11:13.020569    2121 cert_rotation.go:137] Starting client certificate rotation controller
	I0610 07:11:13.021143    2121 kapi.go:59] client config for ingress-addon-legacy-433000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.key", CAFile:"/Users/jenkins/minikube-integration/15074-894/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101d7f510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 07:11:13.025467    2121 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 07:11:13.029504    2121 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 07:11:13.029514    2121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0610 07:11:13.029523    2121 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/id_rsa Username:docker}
	I0610 07:11:13.035709    2121 addons.go:228] Setting addon default-storageclass=true in "ingress-addon-legacy-433000"
	I0610 07:11:13.035735    2121 host.go:66] Checking if "ingress-addon-legacy-433000" exists ...
	I0610 07:11:13.036474    2121 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0610 07:11:13.036481    2121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0610 07:11:13.036487    2121 sshutil.go:53] new ssh client: &{IP:192.168.105.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/ingress-addon-legacy-433000/id_rsa Username:docker}
	I0610 07:11:13.074600    2121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0610 07:11:13.097541    2121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0610 07:11:13.156257    2121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0610 07:11:13.283067    2121 start.go:916] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
	I0610 07:11:13.335014    2121 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0610 07:11:13.342866    2121 addons.go:499] enable addons completed in 322.983709ms: enabled=[storage-provisioner default-storageclass]
	I0610 07:11:13.549055    2121 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-433000" context rescaled to 1 replicas
	I0610 07:11:13.549086    2121 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.105.6 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:11:13.553442    2121 out.go:177] * Verifying Kubernetes components...
	I0610 07:11:13.561352    2121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 07:11:13.582703    2121 kapi.go:59] client config for ingress-addon-legacy-433000: &rest.Config{Host:"https://192.168.105.6:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.key", CAFile:"/Users/jenkins/minikube-integration/15074-894/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101d7f510), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0610 07:11:13.582881    2121 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-433000" to be "Ready" ...
	I0610 07:11:13.586822    2121 node_ready.go:49] node "ingress-addon-legacy-433000" has status "Ready":"True"
	I0610 07:11:13.586836    2121 node_ready.go:38] duration metric: took 3.9455ms waiting for node "ingress-addon-legacy-433000" to be "Ready" ...
	I0610 07:11:13.586841    2121 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 07:11:13.593386    2121 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-t9tnx" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:15.614222    2121 pod_ready.go:102] pod "coredns-66bff467f8-t9tnx" in "kube-system" namespace has status "Ready":"False"
	I0610 07:11:17.622013    2121 pod_ready.go:102] pod "coredns-66bff467f8-t9tnx" in "kube-system" namespace has status "Ready":"False"
	I0610 07:11:19.622111    2121 pod_ready.go:102] pod "coredns-66bff467f8-t9tnx" in "kube-system" namespace has status "Ready":"False"
	I0610 07:11:21.622496    2121 pod_ready.go:102] pod "coredns-66bff467f8-t9tnx" in "kube-system" namespace has status "Ready":"False"
	I0610 07:11:22.617780    2121 pod_ready.go:92] pod "coredns-66bff467f8-t9tnx" in "kube-system" namespace has status "Ready":"True"
	I0610 07:11:22.617804    2121 pod_ready.go:81] duration metric: took 9.024708292s waiting for pod "coredns-66bff467f8-t9tnx" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:22.617817    2121 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-433000" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:22.629809    2121 pod_ready.go:92] pod "etcd-ingress-addon-legacy-433000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:11:22.629830    2121 pod_ready.go:81] duration metric: took 12.00325ms waiting for pod "etcd-ingress-addon-legacy-433000" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:22.629841    2121 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-433000" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:22.635523    2121 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-433000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:11:22.635536    2121 pod_ready.go:81] duration metric: took 5.688417ms waiting for pod "kube-apiserver-ingress-addon-legacy-433000" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:22.635560    2121 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-433000" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:22.641795    2121 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-433000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:11:22.641807    2121 pod_ready.go:81] duration metric: took 6.238708ms waiting for pod "kube-controller-manager-ingress-addon-legacy-433000" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:22.641817    2121 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9vs9z" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:22.645496    2121 pod_ready.go:92] pod "kube-proxy-9vs9z" in "kube-system" namespace has status "Ready":"True"
	I0610 07:11:22.645508    2121 pod_ready.go:81] duration metric: took 3.684958ms waiting for pod "kube-proxy-9vs9z" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:22.645515    2121 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-433000" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:22.809823    2121 request.go:628] Waited for 164.252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-433000
	I0610 07:11:23.009837    2121 request.go:628] Waited for 196.688208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes/ingress-addon-legacy-433000
	I0610 07:11:23.016015    2121 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-433000" in "kube-system" namespace has status "Ready":"True"
	I0610 07:11:23.016046    2121 pod_ready.go:81] duration metric: took 370.533916ms waiting for pod "kube-scheduler-ingress-addon-legacy-433000" in "kube-system" namespace to be "Ready" ...
	I0610 07:11:23.016074    2121 pod_ready.go:38] duration metric: took 9.429540375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0610 07:11:23.016151    2121 api_server.go:52] waiting for apiserver process to appear ...
	I0610 07:11:23.016502    2121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0610 07:11:23.034291    2121 api_server.go:72] duration metric: took 9.485497042s to wait for apiserver process to appear ...
	I0610 07:11:23.034317    2121 api_server.go:88] waiting for apiserver healthz status ...
	I0610 07:11:23.034335    2121 api_server.go:253] Checking apiserver healthz at https://192.168.105.6:8443/healthz ...
	I0610 07:11:23.044127    2121 api_server.go:279] https://192.168.105.6:8443/healthz returned 200:
	ok
	I0610 07:11:23.045311    2121 api_server.go:141] control plane version: v1.18.20
	I0610 07:11:23.045327    2121 api_server.go:131] duration metric: took 11.001416ms to wait for apiserver health ...
	I0610 07:11:23.045335    2121 system_pods.go:43] waiting for kube-system pods to appear ...
	I0610 07:11:23.209833    2121 request.go:628] Waited for 164.43325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0610 07:11:23.223444    2121 system_pods.go:59] 7 kube-system pods found
	I0610 07:11:23.223485    2121 system_pods.go:61] "coredns-66bff467f8-t9tnx" [deb406aa-323e-47a9-b5a5-0a5b5e37cb54] Running
	I0610 07:11:23.223495    2121 system_pods.go:61] "etcd-ingress-addon-legacy-433000" [0674dc32-4379-414e-83d7-de31470d10f1] Running
	I0610 07:11:23.223506    2121 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-433000" [7b5b136a-6ea4-4496-91d7-26cb76d74ca0] Running
	I0610 07:11:23.223520    2121 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-433000" [3ef1d344-acd7-4650-aec2-cff4608b2540] Running
	I0610 07:11:23.223538    2121 system_pods.go:61] "kube-proxy-9vs9z" [6457737c-b4ac-4546-85c0-78dece5b29ca] Running
	I0610 07:11:23.223552    2121 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-433000" [881517bb-351e-48ef-967e-6d550cbc9eb4] Running
	I0610 07:11:23.223564    2121 system_pods.go:61] "storage-provisioner" [b4ba9385-7d98-4200-af1b-b5ca318f9ab3] Running
	I0610 07:11:23.223572    2121 system_pods.go:74] duration metric: took 178.236584ms to wait for pod list to return data ...
	I0610 07:11:23.223587    2121 default_sa.go:34] waiting for default service account to be created ...
	I0610 07:11:23.409824    2121 request.go:628] Waited for 186.108042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/default/serviceaccounts
	I0610 07:11:23.412763    2121 default_sa.go:45] found service account: "default"
	I0610 07:11:23.412777    2121 default_sa.go:55] duration metric: took 189.190042ms for default service account to be created ...
	I0610 07:11:23.412783    2121 system_pods.go:116] waiting for k8s-apps to be running ...
	I0610 07:11:23.609812    2121 request.go:628] Waited for 196.965208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/namespaces/kube-system/pods
	I0610 07:11:23.624034    2121 system_pods.go:86] 7 kube-system pods found
	I0610 07:11:23.624069    2121 system_pods.go:89] "coredns-66bff467f8-t9tnx" [deb406aa-323e-47a9-b5a5-0a5b5e37cb54] Running
	I0610 07:11:23.624078    2121 system_pods.go:89] "etcd-ingress-addon-legacy-433000" [0674dc32-4379-414e-83d7-de31470d10f1] Running
	I0610 07:11:23.624086    2121 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-433000" [7b5b136a-6ea4-4496-91d7-26cb76d74ca0] Running
	I0610 07:11:23.624093    2121 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-433000" [3ef1d344-acd7-4650-aec2-cff4608b2540] Running
	I0610 07:11:23.624101    2121 system_pods.go:89] "kube-proxy-9vs9z" [6457737c-b4ac-4546-85c0-78dece5b29ca] Running
	I0610 07:11:23.624113    2121 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-433000" [881517bb-351e-48ef-967e-6d550cbc9eb4] Running
	I0610 07:11:23.624125    2121 system_pods.go:89] "storage-provisioner" [b4ba9385-7d98-4200-af1b-b5ca318f9ab3] Running
	I0610 07:11:23.624137    2121 system_pods.go:126] duration metric: took 211.351625ms to wait for k8s-apps to be running ...
	I0610 07:11:23.624149    2121 system_svc.go:44] waiting for kubelet service to be running ....
	I0610 07:11:23.624378    2121 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0610 07:11:23.640250    2121 system_svc.go:56] duration metric: took 16.099875ms WaitForService to wait for kubelet.
	I0610 07:11:23.640268    2121 kubeadm.go:581] duration metric: took 10.091495708s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0610 07:11:23.640290    2121 node_conditions.go:102] verifying NodePressure condition ...
	I0610 07:11:23.809809    2121 request.go:628] Waited for 169.452417ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.105.6:8443/api/v1/nodes
	I0610 07:11:23.818270    2121 node_conditions.go:122] node storage ephemeral capacity is 17784760Ki
	I0610 07:11:23.818340    2121 node_conditions.go:123] node cpu capacity is 2
	I0610 07:11:23.818370    2121 node_conditions.go:105] duration metric: took 178.075084ms to run NodePressure ...
	I0610 07:11:23.818397    2121 start.go:228] waiting for startup goroutines ...
	I0610 07:11:23.818415    2121 start.go:233] waiting for cluster config update ...
	I0610 07:11:23.818455    2121 start.go:242] writing updated cluster config ...
	I0610 07:11:23.820012    2121 ssh_runner.go:195] Run: rm -f paused
	I0610 07:11:23.969359    2121 start.go:573] kubectl: 1.25.9, cluster: 1.18.20 (minor skew: 7)
	I0610 07:11:23.972965    2121 out.go:177] 
	W0610 07:11:23.974624    2121 out.go:239] ! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.18.20.
	I0610 07:11:23.978790    2121 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0610 07:11:23.986835    2121 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-433000" cluster and "default" namespace by default
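	The rest.Config dumps, the repeated "Waited for ... due to client-side throttling" lines, and the GET /healthz check in the start log above are all client-go behavior: QPS:0 and Burst:0 in the dump mean the client falls back to client-go's defaults (5 QPS, burst 10), which is what produces the throttling waits. A minimal Go sketch of the same flow, with a placeholder kubeconfig path rather than the profile path from this run:

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	    	// Placeholder path; substitute a real kubeconfig.
	    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Raising these above the 5/10 defaults avoids the
	    	// "client-side throttling" waits seen in the log.
	    	cfg.QPS = 50
	    	cfg.Burst = 100

	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Same check as api_server.go above: GET /healthz returns "ok"
	    	// on a healthy apiserver.
	    	raw, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Println(string(raw))
	    }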
	
	* 
	* ==> Docker <==
	* -- Journal begins at Sat 2023-06-10 14:10:24 UTC, ends at Sat 2023-06-10 14:12:40 UTC. --
	Jun 10 14:12:12 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:12.160401019Z" level=warning msg="cleaning up after shim disconnected" id=792aea3128195415622409d8801668ea85d58a6e1822d063741ae86337d1fd10 namespace=moby
	Jun 10 14:12:12 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:12.160406394Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 14:12:12 ingress-addon-legacy-433000 dockerd[1281]: time="2023-06-10T14:12:12.160498894Z" level=info msg="ignoring event" container=792aea3128195415622409d8801668ea85d58a6e1822d063741ae86337d1fd10 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 14:12:25 ingress-addon-legacy-433000 dockerd[1281]: time="2023-06-10T14:12:25.434284940Z" level=info msg="ignoring event" container=47a3bbdc6f1647c03bcf98764056340a97ecd1210a8d108710575258a52e601a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 14:12:25 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:25.434270773Z" level=info msg="shim disconnected" id=47a3bbdc6f1647c03bcf98764056340a97ecd1210a8d108710575258a52e601a namespace=moby
	Jun 10 14:12:25 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:25.434359023Z" level=warning msg="cleaning up after shim disconnected" id=47a3bbdc6f1647c03bcf98764056340a97ecd1210a8d108710575258a52e601a namespace=moby
	Jun 10 14:12:25 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:25.434366231Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 14:12:29 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:29.424766514Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jun 10 14:12:29 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:29.424810722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:12:29 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:29.424819681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jun 10 14:12:29 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:29.424825181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jun 10 14:12:29 ingress-addon-legacy-433000 dockerd[1281]: time="2023-06-10T14:12:29.462098652Z" level=info msg="ignoring event" container=2e6e9784ea6c16c04af64c057d9e3632861c5583a1350ec89c8da5d228929475 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 14:12:29 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:29.462321819Z" level=info msg="shim disconnected" id=2e6e9784ea6c16c04af64c057d9e3632861c5583a1350ec89c8da5d228929475 namespace=moby
	Jun 10 14:12:29 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:29.462355236Z" level=warning msg="cleaning up after shim disconnected" id=2e6e9784ea6c16c04af64c057d9e3632861c5583a1350ec89c8da5d228929475 namespace=moby
	Jun 10 14:12:29 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:29.462359569Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 14:12:35 ingress-addon-legacy-433000 dockerd[1281]: time="2023-06-10T14:12:35.861282341Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=df2d8d4c75c85ef823df36de55fd644b4a2b4ea14d3f5569024d282a35210961
	Jun 10 14:12:35 ingress-addon-legacy-433000 dockerd[1281]: time="2023-06-10T14:12:35.867307452Z" level=info msg="Container failed to exit within 2s of signal 15 - using the force" container=df2d8d4c75c85ef823df36de55fd644b4a2b4ea14d3f5569024d282a35210961
	Jun 10 14:12:35 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:35.963418724Z" level=info msg="shim disconnected" id=df2d8d4c75c85ef823df36de55fd644b4a2b4ea14d3f5569024d282a35210961 namespace=moby
	Jun 10 14:12:35 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:35.963475474Z" level=warning msg="cleaning up after shim disconnected" id=df2d8d4c75c85ef823df36de55fd644b4a2b4ea14d3f5569024d282a35210961 namespace=moby
	Jun 10 14:12:35 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:35.963481224Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 14:12:35 ingress-addon-legacy-433000 dockerd[1281]: time="2023-06-10T14:12:35.963809598Z" level=info msg="ignoring event" container=df2d8d4c75c85ef823df36de55fd644b4a2b4ea14d3f5569024d282a35210961 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 10 14:12:35 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:35.996341230Z" level=info msg="shim disconnected" id=447c2b08741076b472386812a4248fa8c340897e927497276e7878f8c0a32026 namespace=moby
	Jun 10 14:12:35 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:35.996371146Z" level=warning msg="cleaning up after shim disconnected" id=447c2b08741076b472386812a4248fa8c340897e927497276e7878f8c0a32026 namespace=moby
	Jun 10 14:12:35 ingress-addon-legacy-433000 dockerd[1287]: time="2023-06-10T14:12:35.996375771Z" level=info msg="cleaning up dead shim" namespace=moby
	Jun 10 14:12:35 ingress-addon-legacy-433000 dockerd[1281]: time="2023-06-10T14:12:35.996492729Z" level=info msg="ignoring event" container=447c2b08741076b472386812a4248fa8c340897e927497276e7878f8c0a32026 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID
	2e6e9784ea6c1       13753a81eccfd                                                                                                      11 seconds ago       Exited              hello-world-app           2                   9dc1e4bf325a8
	5e440ae049e6f       nginx@sha256:2e776a66a3556f001aba13431b26e448fe8acba277bf93d2ab1a785571a46d90                                      38 seconds ago       Running             nginx                     0                   4331cf34ad5bb
	df2d8d4c75c85       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   56 seconds ago       Exited              controller                0                   447c2b0874107
	a560780c88615       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              patch                     0                   b8693d8139436
	6be8908ab2af6       jettech/kube-webhook-certgen@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7               About a minute ago   Exited              create                    0                   7142b4ef1effc
	09a3aa184b2c0       gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944    About a minute ago   Running             storage-provisioner       0                   32372a6019080
	92cc60e22b02c       6e17ba78cf3eb                                                                                                      About a minute ago   Running             coredns                   0                   907edf591750a
	c251ddce40737       565297bc6f7d4                                                                                                      About a minute ago   Running             kube-proxy                0                   08133aab7c772
	e888f1785b5f0       2694cf044d665                                                                                                      About a minute ago   Running             kube-apiserver            0                   6e27d134830d0
	8130e2a2d091b       ab707b0a0ea33                                                                                                      About a minute ago   Running             etcd                      0                   bad95c570afc8
	46be6b6adb08b       68a4fac29a865                                                                                                      About a minute ago   Running             kube-controller-manager   0                   8db5f23f24f42
	5c4defdef4e30       095f37015706d                                                                                                      About a minute ago   Running             kube-scheduler            0                   cd6f955413f2c
	
	* 
	* ==> coredns [92cc60e22b02] <==
	* [INFO] 172.17.0.1:65364 - 4374 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000033458s
	[INFO] 172.17.0.1:65364 - 16718 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00056275s
	[INFO] 172.17.0.1:38180 - 38651 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.001226334s
	[INFO] 172.17.0.1:65364 - 27090 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000462041s
	[INFO] 172.17.0.1:65364 - 59167 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000041542s
	[INFO] 172.17.0.1:38180 - 1220 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000015875s
	[INFO] 172.17.0.1:38180 - 48688 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011292s
	[INFO] 172.17.0.1:65364 - 34708 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000054917s
	[INFO] 172.17.0.1:38180 - 23573 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00013375s
	[INFO] 172.17.0.1:38180 - 25629 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000081292s
	[INFO] 172.17.0.1:38180 - 54442 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000033125s
	[INFO] 172.17.0.1:47350 - 41089 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000024875s
	[INFO] 172.17.0.1:29862 - 33485 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000010541s
	[INFO] 172.17.0.1:29862 - 36608 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.0000195s
	[INFO] 172.17.0.1:29862 - 46736 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000011167s
	[INFO] 172.17.0.1:29862 - 29022 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000009583s
	[INFO] 172.17.0.1:29862 - 14478 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000011917s
	[INFO] 172.17.0.1:47350 - 33502 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000009292s
	[INFO] 172.17.0.1:47350 - 24512 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000010833s
	[INFO] 172.17.0.1:47350 - 4186 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000008041s
	[INFO] 172.17.0.1:47350 - 62144 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000008792s
	[INFO] 172.17.0.1:47350 - 55212 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00000825s
	[INFO] 172.17.0.1:47350 - 12287 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000009459s
	[INFO] 172.17.0.1:29862 - 53343 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000012375s
	[INFO] 172.17.0.1:29862 - 56721 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000009041s
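	The burst of NXDOMAIN answers above is ordinary pod-side search-path expansion: with the usual in-cluster resolv.conf (search default.svc.cluster.local svc.cluster.local cluster.local, ndots:5), a lookup of hello-world-app.default.svc.cluster.local is first retried with each search suffix appended before the bare name resolves NOERROR. A trailing dot marks the name fully qualified and skips the expansion; a minimal Go sketch, meant to run inside a pod of this cluster:

	    package main

	    import (
	    	"fmt"
	    	"net"
	    )

	    func main() {
	    	// The trailing dot makes this an FQDN, so the resolver does not
	    	// append the search suffixes that produced the NXDOMAIN run above.
	    	addrs, err := net.LookupHost("hello-world-app.default.svc.cluster.local.")
	    	if err != nil {
	    		fmt.Println("lookup failed:", err)
	    		return
	    	}
	    	fmt.Println(addrs)
	    }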
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-433000
	Roles:              master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ingress-addon-legacy-433000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3273891fc7fc0f39c65075197baa2d52fc489f6f
	                    minikube.k8s.io/name=ingress-addon-legacy-433000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_06_10T07_10_56_0700
	                    minikube.k8s.io/version=v1.30.1
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jun 2023 14:10:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-433000
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jun 2023 14:12:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jun 2023 14:12:33 +0000   Sat, 10 Jun 2023 14:10:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jun 2023 14:12:33 +0000   Sat, 10 Jun 2023 14:10:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jun 2023 14:12:33 +0000   Sat, 10 Jun 2023 14:10:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jun 2023 14:12:33 +0000   Sat, 10 Jun 2023 14:11:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.6
	  Hostname:    ingress-addon-legacy-433000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003892Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             4003892Ki
	  pods:               110
	System Info:
	  Machine ID:                 736ab64107554b15a951e5df4409d021
	  System UUID:                736ab64107554b15a951e5df4409d021
	  Boot ID:                    b6328885-0764-40c8-9781-fc2d9fe40ff6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://24.0.2
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-g5jg6                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 coredns-66bff467f8-t9tnx                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     87s
	  kube-system                 etcd-ingress-addon-legacy-433000                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-apiserver-ingress-addon-legacy-433000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-433000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-9vs9z                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-ingress-addon-legacy-433000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 110s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  110s (x3 over 110s)  kubelet     Node ingress-addon-legacy-433000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s (x3 over 110s)  kubelet     Node ingress-addon-legacy-433000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s (x3 over 110s)  kubelet     Node ingress-addon-legacy-433000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  110s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 97s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  97s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  97s                  kubelet     Node ingress-addon-legacy-433000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                  kubelet     Node ingress-addon-legacy-433000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                  kubelet     Node ingress-addon-legacy-433000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                97s                  kubelet     Node ingress-addon-legacy-433000 status is now: NodeReady
	  Normal  Starting                 87s                  kube-proxy  Starting kube-proxy.
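	As a cross-check on the "Allocated resources" figures above: the 650m CPU request is the sum of the per-pod requests (250m apiserver + 200m controller-manager + 100m scheduler + 100m coredns), and 650m out of the node's 2000m capacity is 32.5%, reported as 32%.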
	
	* 
	* ==> dmesg <==
	* [Jun10 14:10] ACPI: SRAT not present
	[  +0.000000] KASLR disabled due to lack of seed
	[  +0.629064] EINJ: EINJ table not found.
	[  +0.508693] systemd-fstab-generator[115]: Ignoring "noauto" for root device
	[  +0.043871] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000870] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.272233] systemd-fstab-generator[482]: Ignoring "noauto" for root device
	[  +0.072434] systemd-fstab-generator[493]: Ignoring "noauto" for root device
	[  +3.517379] systemd-fstab-generator[798]: Ignoring "noauto" for root device
	[  +1.599840] systemd-fstab-generator[971]: Ignoring "noauto" for root device
	[  +0.182068] systemd-fstab-generator[1027]: Ignoring "noauto" for root device
	[  +0.080887] systemd-fstab-generator[1038]: Ignoring "noauto" for root device
	[  +0.086851] systemd-fstab-generator[1051]: Ignoring "noauto" for root device
	[  +1.150826] kauditd_printk_skb: 53 callbacks suppressed
	[  +3.374359] systemd-fstab-generator[1256]: Ignoring "noauto" for root device
	[  +1.633688] kauditd_printk_skb: 56 callbacks suppressed
	[  +3.040542] systemd-fstab-generator[1725]: Ignoring "noauto" for root device
	[  +7.576278] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.087952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +6.177865] systemd-fstab-generator[2813]: Ignoring "noauto" for root device
	[Jun10 14:11] kauditd_printk_skb: 7 callbacks suppressed
	[  +8.654922] kauditd_printk_skb: 9 callbacks suppressed
	[  +7.171704] TCP: eth0: Driver has suspect GRO implementation, TCP performance may be compromised.
	[Jun10 14:12] kauditd_printk_skb: 1 callbacks suppressed
	
	* 
	* ==> etcd [8130e2a2d091] <==
	* raft2023/06/10 14:10:51 INFO: ed054832bd1917e1 became follower at term 0
	raft2023/06/10 14:10:51 INFO: newRaft ed054832bd1917e1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/06/10 14:10:51 INFO: ed054832bd1917e1 became follower at term 1
	raft2023/06/10 14:10:51 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-06-10 14:10:51.673838 W | auth: simple token is not cryptographically signed
	2023-06-10 14:10:51.674573 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	raft2023/06/10 14:10:51 INFO: ed054832bd1917e1 switched to configuration voters=(17079136544630577121)
	2023-06-10 14:10:51.675915 I | etcdserver: ed054832bd1917e1 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-06-10 14:10:51.675996 I | etcdserver/membership: added member ed054832bd1917e1 [https://192.168.105.6:2380] to cluster 45a39c2c59b0edf4
	2023-06-10 14:10:51.676580 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-06-10 14:10:51.676674 I | embed: listening for peers on 192.168.105.6:2380
	2023-06-10 14:10:51.676710 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/06/10 14:10:52 INFO: ed054832bd1917e1 is starting a new election at term 1
	raft2023/06/10 14:10:52 INFO: ed054832bd1917e1 became candidate at term 2
	raft2023/06/10 14:10:52 INFO: ed054832bd1917e1 received MsgVoteResp from ed054832bd1917e1 at term 2
	raft2023/06/10 14:10:52 INFO: ed054832bd1917e1 became leader at term 2
	raft2023/06/10 14:10:52 INFO: raft.node: ed054832bd1917e1 elected leader ed054832bd1917e1 at term 2
	2023-06-10 14:10:52.073168 I | etcdserver: published {Name:ingress-addon-legacy-433000 ClientURLs:[https://192.168.105.6:2379]} to cluster 45a39c2c59b0edf4
	2023-06-10 14:10:52.073189 I | embed: ready to serve client requests
	2023-06-10 14:10:52.073826 I | embed: serving client requests on 127.0.0.1:2379
	2023-06-10 14:10:52.073875 I | embed: ready to serve client requests
	2023-06-10 14:10:52.074305 I | embed: serving client requests on 192.168.105.6:2379
	2023-06-10 14:10:52.080084 I | etcdserver: setting up the initial cluster version to 3.4
	2023-06-10 14:10:52.082569 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-06-10 14:10:52.082621 I | etcdserver/api: enabled capabilities for version 3.4
	
	* 
	* ==> kernel <==
	*  14:12:40 up 2 min,  0 users,  load average: 0.58, 0.28, 0.10
	Linux ingress-addon-legacy-433000 5.10.57 #1 SMP PREEMPT Wed Jun 7 01:52:34 UTC 2023 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [e888f1785b5f] <==
	* E0610 14:10:54.179331       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.105.6, ResourceVersion: 0, AdditionalErrorMsg: 
	I0610 14:10:54.252408       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0610 14:10:54.252470       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0610 14:10:54.252488       1 cache.go:39] Caches are synced for autoregister controller
	I0610 14:10:54.252628       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0610 14:10:54.252652       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0610 14:10:55.151624       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0610 14:10:55.152068       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0610 14:10:55.166755       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0610 14:10:55.172855       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0610 14:10:55.172898       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0610 14:10:55.310605       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0610 14:10:55.323604       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0610 14:10:55.400893       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.105.6]
	I0610 14:10:55.401302       1 controller.go:609] quota admission added evaluator for: endpoints
	I0610 14:10:55.402606       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0610 14:10:56.438013       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0610 14:10:57.010200       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0610 14:10:57.214390       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0610 14:11:03.353375       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0610 14:11:13.438570       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0610 14:11:13.709856       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0610 14:11:24.333555       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0610 14:11:59.174399       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0610 14:12:33.862658       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [46be6b6adb08] <==
	* I0610 14:11:13.473321       1 shared_informer.go:230] Caches are synced for attach detach 
	I0610 14:11:13.484358       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0610 14:11:13.487686       1 shared_informer.go:230] Caches are synced for TTL 
	I0610 14:11:13.488637       1 shared_informer.go:230] Caches are synced for GC 
	I0610 14:11:13.535417       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0610 14:11:13.535574       1 shared_informer.go:230] Caches are synced for endpoint 
	I0610 14:11:13.672268       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0610 14:11:13.686266       1 shared_informer.go:230] Caches are synced for disruption 
	I0610 14:11:13.686276       1 disruption.go:339] Sending events to api server.
	I0610 14:11:13.692895       1 shared_informer.go:230] Caches are synced for resource quota 
	I0610 14:11:13.692950       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0610 14:11:13.693026       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0610 14:11:13.707768       1 shared_informer.go:230] Caches are synced for deployment 
	I0610 14:11:13.718179       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"9431c80d-765b-4a4f-840d-96186ddae4d9", APIVersion:"apps/v1", ResourceVersion:"323", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0610 14:11:13.724187       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"7f0ab66a-e6cf-49ea-9924-f08f2ba76aa9", APIVersion:"apps/v1", ResourceVersion:"355", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-t9tnx
	I0610 14:11:13.766278       1 shared_informer.go:230] Caches are synced for resource quota 
	I0610 14:11:24.327514       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"2e08d54f-70a0-452e-9821-8a81f473035d", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0610 14:11:24.341976       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"8a2cfbc6-d9ab-4ece-ad6d-bd7a4843e699", APIVersion:"batch/v1", ResourceVersion:"435", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-jvt24
	I0610 14:11:24.343777       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"cbef9587-e46a-4529-8252-397a33630c2b", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-ttdgk
	I0610 14:11:24.367587       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6379bcad-20d7-4986-95a4-0edc6bee55dd", APIVersion:"batch/v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-lw2k8
	I0610 14:11:27.575774       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"6379bcad-20d7-4986-95a4-0edc6bee55dd", APIVersion:"batch/v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0610 14:11:27.588776       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"8a2cfbc6-d9ab-4ece-ad6d-bd7a4843e699", APIVersion:"batch/v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0610 14:12:08.470940       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"d93bac39-fc89-49e5-994b-f242f0ea7fad", APIVersion:"apps/v1", ResourceVersion:"574", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0610 14:12:08.478346       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"eecfb387-2c05-4c2c-aff8-1484eb1fa24f", APIVersion:"apps/v1", ResourceVersion:"575", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-g5jg6
	E0610 14:12:38.637400       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-gk6q4" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [c251ddce4073] <==
	* W0610 14:11:13.970086       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0610 14:11:13.974065       1 node.go:136] Successfully retrieved node IP: 192.168.105.6
	I0610 14:11:13.974082       1 server_others.go:186] Using iptables Proxier.
	I0610 14:11:13.974227       1 server.go:583] Version: v1.18.20
	I0610 14:11:13.976415       1 config.go:315] Starting service config controller
	I0610 14:11:13.976451       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0610 14:11:13.976476       1 config.go:133] Starting endpoints config controller
	I0610 14:11:13.976495       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0610 14:11:14.076598       1 shared_informer.go:230] Caches are synced for service config 
	I0610 14:11:14.076659       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [5c4defdef4e3] <==
	* I0610 14:10:54.181432       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0610 14:10:54.181457       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0610 14:10:54.185355       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0610 14:10:54.185429       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 14:10:54.185493       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0610 14:10:54.185546       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 14:10:54.185629       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 14:10:54.185665       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 14:10:54.185709       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 14:10:54.185741       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 14:10:54.185780       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 14:10:54.185812       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0610 14:10:54.185887       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 14:10:54.186228       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0610 14:10:55.004061       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0610 14:10:55.006144       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0610 14:10:55.009823       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0610 14:10:55.067818       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0610 14:10:55.131953       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0610 14:10:55.165232       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0610 14:10:55.234100       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0610 14:10:55.247886       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0610 14:10:55.254121       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0610 14:10:58.181459       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0610 14:11:13.388992       1 factory.go:503] pod: kube-system/storage-provisioner is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Sat 2023-06-10 14:10:24 UTC, ends at Sat 2023-06-10 14:12:41 UTC. --
	Jun 10 14:12:14 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:14.113619    2819 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 792aea3128195415622409d8801668ea85d58a6e1822d063741ae86337d1fd10
	Jun 10 14:12:14 ingress-addon-legacy-433000 kubelet[2819]: E0610 14:12:14.113970    2819 pod_workers.go:191] Error syncing pod 55156b3a-02ea-4d16-a3b1-38b4c28b36bd ("hello-world-app-5f5d8b66bb-g5jg6_default(55156b3a-02ea-4d16-a3b1-38b4c28b36bd)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 10s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-g5jg6_default(55156b3a-02ea-4d16-a3b1-38b4c28b36bd)"
	Jun 10 14:12:21 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:21.366037    2819 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 42ec3ff4d1457b8ba312678add15a4627d87aa69499e2efbc3537fce368eaa97
	Jun 10 14:12:21 ingress-addon-legacy-433000 kubelet[2819]: E0610 14:12:21.371060    2819 pod_workers.go:191] Error syncing pod 25822bc0-24a2-4633-9581-892efa91c2ba ("kube-ingress-dns-minikube_kube-system(25822bc0-24a2-4633-9581-892efa91c2ba)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with CrashLoopBackOff: "back-off 20s restarting failed container=minikube-ingress-dns pod=kube-ingress-dns-minikube_kube-system(25822bc0-24a2-4633-9581-892efa91c2ba)"
	Jun 10 14:12:23 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:23.923228    2819 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-td8qj" (UniqueName: "kubernetes.io/secret/25822bc0-24a2-4633-9581-892efa91c2ba-minikube-ingress-dns-token-td8qj") pod "25822bc0-24a2-4633-9581-892efa91c2ba" (UID: "25822bc0-24a2-4633-9581-892efa91c2ba")
	Jun 10 14:12:23 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:23.925212    2819 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25822bc0-24a2-4633-9581-892efa91c2ba-minikube-ingress-dns-token-td8qj" (OuterVolumeSpecName: "minikube-ingress-dns-token-td8qj") pod "25822bc0-24a2-4633-9581-892efa91c2ba" (UID: "25822bc0-24a2-4633-9581-892efa91c2ba"). InnerVolumeSpecName "minikube-ingress-dns-token-td8qj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 14:12:24 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:24.024659    2819 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-td8qj" (UniqueName: "kubernetes.io/secret/25822bc0-24a2-4633-9581-892efa91c2ba-minikube-ingress-dns-token-td8qj") on node "ingress-addon-legacy-433000" DevicePath ""
	Jun 10 14:12:26 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:26.310061    2819 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 42ec3ff4d1457b8ba312678add15a4627d87aa69499e2efbc3537fce368eaa97
	Jun 10 14:12:29 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:29.365794    2819 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 792aea3128195415622409d8801668ea85d58a6e1822d063741ae86337d1fd10
	Jun 10 14:12:29 ingress-addon-legacy-433000 kubelet[2819]: W0610 14:12:29.475805    2819 container.go:412] Failed to create summary reader for "/kubepods/besteffort/pod55156b3a-02ea-4d16-a3b1-38b4c28b36bd/2e6e9784ea6c16c04af64c057d9e3632861c5583a1350ec89c8da5d228929475": none of the resources are being tracked.
	Jun 10 14:12:30 ingress-addon-legacy-433000 kubelet[2819]: W0610 14:12:30.387370    2819 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-g5jg6 through plugin: invalid network status for
	Jun 10 14:12:30 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:30.395418    2819 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 792aea3128195415622409d8801668ea85d58a6e1822d063741ae86337d1fd10
	Jun 10 14:12:30 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:30.395823    2819 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 2e6e9784ea6c16c04af64c057d9e3632861c5583a1350ec89c8da5d228929475
	Jun 10 14:12:30 ingress-addon-legacy-433000 kubelet[2819]: E0610 14:12:30.396215    2819 pod_workers.go:191] Error syncing pod 55156b3a-02ea-4d16-a3b1-38b4c28b36bd ("hello-world-app-5f5d8b66bb-g5jg6_default(55156b3a-02ea-4d16-a3b1-38b4c28b36bd)"), skipping: failed to "StartContainer" for "hello-world-app" with CrashLoopBackOff: "back-off 20s restarting failed container=hello-world-app pod=hello-world-app-5f5d8b66bb-g5jg6_default(55156b3a-02ea-4d16-a3b1-38b4c28b36bd)"
	Jun 10 14:12:31 ingress-addon-legacy-433000 kubelet[2819]: W0610 14:12:31.404894    2819 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for default/hello-world-app-5f5d8b66bb-g5jg6 through plugin: invalid network status for
	Jun 10 14:12:33 ingress-addon-legacy-433000 kubelet[2819]: E0610 14:12:33.855909    2819 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-ttdgk.176751a4d472544c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-ttdgk", UID:"7dd16e8f-5591-4299-8641-50d2b25b4049", APIVersion:"v1", ResourceVersion:"439", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-433000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1193f1472eb6a4c, ext:96864505165, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1193f1472eb6a4c, ext:96864505165, loc:(*time.Location)(0x6a0ef20)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-ttdgk.176751a4d472544c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 10 14:12:33 ingress-addon-legacy-433000 kubelet[2819]: E0610 14:12:33.860390    2819 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-ttdgk.176751a4d472544c", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-ttdgk", UID:"7dd16e8f-5591-4299-8641-50d2b25b4049", APIVersion:"v1", ResourceVersion:"439", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-433000"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc1193f1472eb6a4c, ext:96864505165, loc:(*time.Location)(0x6a0ef20)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc1193f147327ff17, ext:96868475417, loc:(*time.Location)(0x6a0ef20)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-ttdgk.176751a4d472544c" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jun 10 14:12:36 ingress-addon-legacy-433000 kubelet[2819]: W0610 14:12:36.486111    2819 pod_container_deletor.go:77] Container "447c2b08741076b472386812a4248fa8c340897e927497276e7878f8c0a32026" not found in pod's containers
	Jun 10 14:12:38 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:38.043572    2819 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7dd16e8f-5591-4299-8641-50d2b25b4049-webhook-cert") pod "7dd16e8f-5591-4299-8641-50d2b25b4049" (UID: "7dd16e8f-5591-4299-8641-50d2b25b4049")
	Jun 10 14:12:38 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:38.044378    2819 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-6rdbs" (UniqueName: "kubernetes.io/secret/7dd16e8f-5591-4299-8641-50d2b25b4049-ingress-nginx-token-6rdbs") pod "7dd16e8f-5591-4299-8641-50d2b25b4049" (UID: "7dd16e8f-5591-4299-8641-50d2b25b4049")
	Jun 10 14:12:38 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:38.053029    2819 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd16e8f-5591-4299-8641-50d2b25b4049-ingress-nginx-token-6rdbs" (OuterVolumeSpecName: "ingress-nginx-token-6rdbs") pod "7dd16e8f-5591-4299-8641-50d2b25b4049" (UID: "7dd16e8f-5591-4299-8641-50d2b25b4049"). InnerVolumeSpecName "ingress-nginx-token-6rdbs". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 14:12:38 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:38.055022    2819 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dd16e8f-5591-4299-8641-50d2b25b4049-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "7dd16e8f-5591-4299-8641-50d2b25b4049" (UID: "7dd16e8f-5591-4299-8641-50d2b25b4049"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jun 10 14:12:38 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:38.144992    2819 reconciler.go:319] Volume detached for volume "ingress-nginx-token-6rdbs" (UniqueName: "kubernetes.io/secret/7dd16e8f-5591-4299-8641-50d2b25b4049-ingress-nginx-token-6rdbs") on node "ingress-addon-legacy-433000" DevicePath ""
	Jun 10 14:12:38 ingress-addon-legacy-433000 kubelet[2819]: I0610 14:12:38.145074    2819 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/7dd16e8f-5591-4299-8641-50d2b25b4049-webhook-cert") on node "ingress-addon-legacy-433000" DevicePath ""
	Jun 10 14:12:39 ingress-addon-legacy-433000 kubelet[2819]: W0610 14:12:39.393660    2819 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/7dd16e8f-5591-4299-8641-50d2b25b4049/volumes" does not exist
	
	* 
	* ==> storage-provisioner [09a3aa184b2c] <==
	* I0610 14:11:18.707065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0610 14:11:18.712155       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0610 14:11:18.712174       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0610 14:11:18.714510       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0610 14:11:18.714837       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"efb76256-4bc2-48ae-8bbf-ea33e3817c6d", APIVersion:"v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-433000_6d4d1645-2dcc-467d-b06c-b3ed8a16586a became leader
	I0610 14:11:18.714951       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-433000_6d4d1645-2dcc-467d-b06c-b3ed8a16586a!
	I0610 14:11:18.815757       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-433000_6d4d1645-2dcc-467d-b06c-b3ed8a16586a!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p ingress-addon-legacy-433000 -n ingress-addon-legacy-433000
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-433000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (56.11s)

TestMountStart/serial/StartWithMountFirst (10.3s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-043000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-043000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.227524041s)

-- stdout --
	* [mount-start-1-043000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-043000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-043000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-043000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-043000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-043000 -n mount-start-1-043000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-043000 -n mount-start-1-043000: exit status 7 (68.136084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-043000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.30s)
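
Every failed qemu2 start in this report stops at the same step: the driver cannot reach the socket_vmnet control socket, so no VM is ever launched. A minimal Go sketch, not part of the test suite, that probes the same socket the driver uses (assuming the default /var/run/socket_vmnet path shown in the logs):

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the unix socket that socket_vmnet_client connects to. When the
// socket_vmnet daemon is down (or listening on a different path), this
// fails with the same "connection refused" every start above reports.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, restarting the socket_vmnet service on the host before rerunning the suite is the likely fix.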

TestMultiNode/serial/FreshStart2Nodes (10.12s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-214000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:85: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-214000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.049451958s)

-- stdout --
	* [multinode-214000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-214000 in cluster multinode-214000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-214000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0610 07:14:47.350622    2416 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:14:47.350763    2416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:14:47.350766    2416 out.go:309] Setting ErrFile to fd 2...
	I0610 07:14:47.350768    2416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:14:47.350837    2416 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:14:47.351913    2416 out.go:303] Setting JSON to false
	I0610 07:14:47.367215    2416 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":857,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:14:47.367287    2416 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:14:47.372431    2416 out.go:177] * [multinode-214000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:14:47.378352    2416 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:14:47.378406    2416 notify.go:220] Checking for updates...
	I0610 07:14:47.382364    2416 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:14:47.389282    2416 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:14:47.393358    2416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:14:47.396313    2416 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:14:47.400313    2416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:14:47.403520    2416 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:14:47.407318    2416 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:14:47.414392    2416 start.go:297] selected driver: qemu2
	I0610 07:14:47.414397    2416 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:14:47.414403    2416 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:14:47.416326    2416 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:14:47.419318    2416 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:14:47.422395    2416 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:14:47.422414    2416 cni.go:84] Creating CNI manager for ""
	I0610 07:14:47.422421    2416 cni.go:136] 0 nodes found, recommending kindnet
	I0610 07:14:47.422427    2416 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 07:14:47.422433    2416 start_flags.go:319] config:
	{Name:multinode-214000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-214000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:14:47.422533    2416 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:14:47.430334    2416 out.go:177] * Starting control plane node multinode-214000 in cluster multinode-214000
	I0610 07:14:47.434364    2416 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:14:47.434392    2416 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:14:47.434406    2416 cache.go:57] Caching tarball of preloaded images
	I0610 07:14:47.434492    2416 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:14:47.434498    2416 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:14:47.434699    2416 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/multinode-214000/config.json ...
	I0610 07:14:47.434711    2416 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/multinode-214000/config.json: {Name:mkf727562732d45958def80fd52f6d5bb12217f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:14:47.434924    2416 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:14:47.434937    2416 start.go:364] acquiring machines lock for multinode-214000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:14:47.434967    2416 start.go:368] acquired machines lock for "multinode-214000" in 24.833µs
	I0610 07:14:47.434979    2416 start.go:93] Provisioning new machine with config: &{Name:multinode-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-214000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:14:47.435009    2416 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:14:47.439297    2416 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:14:47.455956    2416 start.go:159] libmachine.API.Create for "multinode-214000" (driver="qemu2")
	I0610 07:14:47.455980    2416 client.go:168] LocalClient.Create starting
	I0610 07:14:47.456037    2416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:14:47.456057    2416 main.go:141] libmachine: Decoding PEM data...
	I0610 07:14:47.456072    2416 main.go:141] libmachine: Parsing certificate...
	I0610 07:14:47.456119    2416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:14:47.456133    2416 main.go:141] libmachine: Decoding PEM data...
	I0610 07:14:47.456141    2416 main.go:141] libmachine: Parsing certificate...
	I0610 07:14:47.456469    2416 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:14:47.680410    2416 main.go:141] libmachine: Creating SSH key...
	I0610 07:14:47.816332    2416 main.go:141] libmachine: Creating Disk image...
	I0610 07:14:47.816340    2416 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:14:47.816505    2416 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2
	I0610 07:14:47.829308    2416 main.go:141] libmachine: STDOUT: 
	I0610 07:14:47.829326    2416 main.go:141] libmachine: STDERR: 
	I0610 07:14:47.829372    2416 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2 +20000M
	I0610 07:14:47.836520    2416 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:14:47.836532    2416 main.go:141] libmachine: STDERR: 
	I0610 07:14:47.836547    2416 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2
	I0610 07:14:47.836552    2416 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:14:47.836586    2416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:1b:18:3c:9c:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2
	I0610 07:14:47.838118    2416 main.go:141] libmachine: STDOUT: 
	I0610 07:14:47.838136    2416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:14:47.838155    2416 client.go:171] LocalClient.Create took 382.183125ms
	I0610 07:14:49.840315    2416 start.go:128] duration metric: createHost completed in 2.405355s
	I0610 07:14:49.840410    2416 start.go:83] releasing machines lock for "multinode-214000", held for 2.405515375s
	W0610 07:14:49.840461    2416 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:14:49.851699    2416 out.go:177] * Deleting "multinode-214000" in qemu2 ...
	W0610 07:14:49.870133    2416 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:14:49.870163    2416 start.go:702] Will try again in 5 seconds ...
	I0610 07:14:54.872295    2416 start.go:364] acquiring machines lock for multinode-214000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:14:54.872878    2416 start.go:368] acquired machines lock for "multinode-214000" in 438.125µs
	I0610 07:14:54.873035    2416 start.go:93] Provisioning new machine with config: &{Name:multinode-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-214000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:14:54.873377    2416 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:14:54.883060    2416 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:14:54.932993    2416 start.go:159] libmachine.API.Create for "multinode-214000" (driver="qemu2")
	I0610 07:14:54.933049    2416 client.go:168] LocalClient.Create starting
	I0610 07:14:54.933262    2416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:14:54.933347    2416 main.go:141] libmachine: Decoding PEM data...
	I0610 07:14:54.933373    2416 main.go:141] libmachine: Parsing certificate...
	I0610 07:14:54.933470    2416 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:14:54.933504    2416 main.go:141] libmachine: Decoding PEM data...
	I0610 07:14:54.933525    2416 main.go:141] libmachine: Parsing certificate...
	I0610 07:14:54.934119    2416 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:14:55.254531    2416 main.go:141] libmachine: Creating SSH key...
	I0610 07:14:55.315700    2416 main.go:141] libmachine: Creating Disk image...
	I0610 07:14:55.315710    2416 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:14:55.315856    2416 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2
	I0610 07:14:55.324278    2416 main.go:141] libmachine: STDOUT: 
	I0610 07:14:55.324291    2416 main.go:141] libmachine: STDERR: 
	I0610 07:14:55.324361    2416 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2 +20000M
	I0610 07:14:55.331429    2416 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:14:55.331444    2416 main.go:141] libmachine: STDERR: 
	I0610 07:14:55.331472    2416 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2
	I0610 07:14:55.331476    2416 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:14:55.331512    2416 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:aa:de:e4:6e:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2
	I0610 07:14:55.333019    2416 main.go:141] libmachine: STDOUT: 
	I0610 07:14:55.333031    2416 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:14:55.333042    2416 client.go:171] LocalClient.Create took 399.997083ms
	I0610 07:14:57.335127    2416 start.go:128] duration metric: createHost completed in 2.461791417s
	I0610 07:14:57.335192    2416 start.go:83] releasing machines lock for "multinode-214000", held for 2.462373042s
	W0610 07:14:57.335586    2416 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-214000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-214000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:14:57.345186    2416 out.go:177] 
	W0610 07:14:57.349206    2416 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:14:57.349252    2416 out.go:239] * 
	* 
	W0610 07:14:57.351979    2416 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:14:57.360118    2416 out.go:177] 

** /stderr **
multinode_test.go:87: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-214000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (69.3355ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.12s)
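
Note that the disk-image preparation succeeds on both attempts above (qemu-img convert, then resize); only the subsequent socket_vmnet-mediated VM launch fails. A hedged sketch of those two qemu-img steps, using placeholder file names rather than the Jenkins paths from the log:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to qemu-img and echoes its output, mirroring how the
// libmachine log above reports STDOUT/STDERR for each step.
func run(args ...string) error {
	out, err := exec.Command("qemu-img", args...).CombinedOutput()
	fmt.Printf("qemu-img %v\n%s", args, out)
	return err
}

func main() {
	// Step 1: convert the raw boot image to qcow2 (as in the log).
	if err := run("convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2"); err != nil {
		fmt.Println("convert failed:", err)
		return
	}
	// Step 2: grow the image by 20000 MB ("Image resized." above).
	if err := run("resize", "disk.qcow2", "+20000M"); err != nil {
		fmt.Println("resize failed:", err)
	}
}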

TestMultiNode/serial/DeployApp2Nodes (103.19s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:481: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (118.779708ms)

** stderr ** 
	error: cluster "multinode-214000" does not exist

** /stderr **
multinode_test.go:483: failed to create busybox deployment to multinode cluster
multinode_test.go:486: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- rollout status deployment/busybox: exit status 1 (55.164584ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:488: failed to deploy busybox to multinode cluster
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (54.059542ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.613084ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.028375ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.461417ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.708ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.275917ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.792083ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.91125ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.262125ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
E0610 07:15:58.214321    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.902167ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.557917ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:496: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:512: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:516: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:516: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (53.714792ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:518: failed get Pod names
multinode_test.go:524: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- exec  -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.807292ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:526: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- exec  -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- exec  -- nslookup kubernetes.default: exit status 1 (54.133917ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:536: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:542: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.406459ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:544: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (28.965959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (103.19s)
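
Each kubectl retry above fails before any network I/O: FreshStart2Nodes never created the VM, so the kubeconfig has no usable API server for multinode-214000. A sketch of that check (it assumes k8s.io/client-go is available; the kubeconfig path is the KUBECONFIG value from the start output, and the printed strings are illustrative, not kubectl's exact wording):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// Load the kubeconfig the way kubectl's loader does and look up the
// cluster entry behind the multinode-214000 context. An absent context
// or an empty Server field is what surfaces as the "does not exist" /
// "no server found" errors in the retries above.
func main() {
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/15074-894/kubeconfig")
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	ctx, ok := cfg.Contexts["multinode-214000"]
	if !ok {
		fmt.Println("no context named multinode-214000")
		return
	}
	if c := cfg.Clusters[ctx.Cluster]; c == nil || c.Server == "" {
		fmt.Println("no server recorded for cluster", ctx.Cluster)
		return
	}
	fmt.Println("API server:", cfg.Clusters[ctx.Cluster].Server)
}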

TestMultiNode/serial/PingHostFrom2Pods (0.08s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-214000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.146292ms)

** stderr ** 
	error: no server found for cluster "multinode-214000"

** /stderr **
multinode_test.go:554: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (28.780917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

TestMultiNode/serial/AddNode (0.07s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-214000 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-214000 -v 3 --alsologtostderr: exit status 89 (40.138417ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-214000"

-- /stdout --
** stderr ** 
	I0610 07:16:40.750909    2497 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:16:40.751097    2497 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:40.751100    2497 out.go:309] Setting ErrFile to fd 2...
	I0610 07:16:40.751102    2497 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:40.751167    2497 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:16:40.751389    2497 mustload.go:65] Loading cluster: multinode-214000
	I0610 07:16:40.751564    2497 config.go:182] Loaded profile config "multinode-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:16:40.756190    2497 out.go:177] * The control plane node must be running for this command
	I0610 07:16:40.759371    2497 out.go:177]   To start a cluster, run: "minikube start -p multinode-214000"

** /stderr **
multinode_test.go:112: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-214000 -v 3 --alsologtostderr" : exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (28.674ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)
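
AddNode fails before it can do anything: the control plane is stopped, so the binary exits with status 89, and every remaining subtest in this group fails for the same underlying reason. As a rough sketch of what this subtest is doing (an illustrative harness, not the suite's actual helper code), the invocation from multinode_test.go:110 can be reproduced like this:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as multinode_test.go:110 above. For it to succeed, the
	// control plane would first need: out/minikube-darwin-arm64 start -p multinode-214000
	cmd := exec.Command("out/minikube-darwin-arm64",
		"node", "add", "-p", "multinode-214000", "-v", "3", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit status:", ee.ExitCode()) // 89 = control plane not running
	}
}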

TestMultiNode/serial/ProfileList (0.11s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:155: expected profile "multinode-214000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-214000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-214000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"VMDriver\":\"\",\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":0,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.27.2\",\"ClusterName\":\"multinode-214000\",\"Namespace\":\"default\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\",\"NodeIP\":\"\",\"NodePort\":8443,\"NodeName\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.27.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\"},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (28.894125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
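
The assertion above parses the `profile list --output json` payload and counts the entries under Config.Nodes, expecting 3. A minimal decoding sketch, assuming the JSON shape quoted in the failure (the struct here is illustrative and mirrors only the fields the check needs):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models just enough of the payload for the node count.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Trimmed-down version of the payload quoted in the failure above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-214000","Config":{"Nodes":[{"Name":"","Port":8443}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test wants 3 nodes; the stopped single-node profile reports 1.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}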

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-214000 status --output json --alsologtostderr
multinode_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-214000 status --output json --alsologtostderr: exit status 7 (28.694583ms)

-- stdout --
	{"Name":"multinode-214000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0610 07:16:40.925431    2507 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:16:40.925623    2507 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:40.925626    2507 out.go:309] Setting ErrFile to fd 2...
	I0610 07:16:40.925629    2507 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:40.925705    2507 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:16:40.925822    2507 out.go:303] Setting JSON to true
	I0610 07:16:40.925830    2507 mustload.go:65] Loading cluster: multinode-214000
	I0610 07:16:40.925895    2507 notify.go:220] Checking for updates...
	I0610 07:16:40.926016    2507 config.go:182] Loaded profile config "multinode-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:16:40.926021    2507 status.go:255] checking status of multinode-214000 ...
	I0610 07:16:40.926192    2507 status.go:330] multinode-214000 host status = "Stopped" (err=<nil>)
	I0610 07:16:40.926195    2507 status.go:343] host is not running, skipping remaining checks
	I0610 07:16:40.926198    2507 status.go:257] multinode-214000 status: &{Name:multinode-214000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:180: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-214000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (28.633041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
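
The unmarshal error above is a shape mismatch: with a single node, `status --output json` emits one JSON object, while the test decodes into a slice (`[]cmd.Status`). A minimal reproduction, using an illustrative local Status type in place of minikube's real cmd.Status:

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// The exact object printed on stdout above.
	raw := []byte(`{"Name":"multinode-214000","Host":"Stopped","Kubelet":"Stopped",` +
		`"APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var many []Status
	// Prints: json: cannot unmarshal object into Go value of type []main.Status
	fmt.Println(json.Unmarshal(raw, &many))

	// A tolerant decoder would try the single-object form before giving up:
	var one Status
	if err := json.Unmarshal(raw, &one); err == nil {
		many = append(many, one)
	}
	fmt.Println(len(many), "status entries")
}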

TestMultiNode/serial/StopNode (0.13s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-214000 node stop m03
multinode_test.go:210: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-214000 node stop m03: exit status 85 (45.695833ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:212: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-214000 node stop m03": exit status 85
multinode_test.go:216: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-214000 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-214000 status: exit status 7 (29.131708ms)

-- stdout --
	multinode-214000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-214000 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-214000 status --alsologtostderr: exit status 7 (28.640584ms)

-- stdout --
	multinode-214000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 07:16:41.058468    2515 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:16:41.058594    2515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:41.058596    2515 out.go:309] Setting ErrFile to fd 2...
	I0610 07:16:41.058599    2515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:41.058663    2515 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:16:41.058768    2515 out.go:303] Setting JSON to false
	I0610 07:16:41.058784    2515 mustload.go:65] Loading cluster: multinode-214000
	I0610 07:16:41.058850    2515 notify.go:220] Checking for updates...
	I0610 07:16:41.058970    2515 config.go:182] Loaded profile config "multinode-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:16:41.058975    2515 status.go:255] checking status of multinode-214000 ...
	I0610 07:16:41.059152    2515 status.go:330] multinode-214000 host status = "Stopped" (err=<nil>)
	I0610 07:16:41.059158    2515 status.go:343] host is not running, skipping remaining checks
	I0610 07:16:41.059160    2515 status.go:257] multinode-214000 status: &{Name:multinode-214000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:229: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-214000 status --alsologtostderr": multinode-214000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (28.805ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
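
`node stop m03` exits with status 85 because the third node was never created: AddNode failed above, so the profile holds only the control-plane node. A small diagnostic sketch (not part of the suite) that lists the profile's nodes with the same `node list` subcommand used later in this group:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, _ := exec.Command("out/minikube-darwin-arm64",
		"node", "list", "-p", "multinode-214000").CombinedOutput()
	fmt.Printf("%s", out)
	// With only the control plane present there is no "-m03" entry,
	// which is exactly the GUEST_NODE_RETRIEVE condition above.
	fmt.Println("has m03:", strings.Contains(string(out), "m03"))
}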

TestMultiNode/serial/StartAfterStop (0.1s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-214000 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-214000 node start m03 --alsologtostderr: exit status 85 (42.916917ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0610 07:16:41.116326    2519 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:16:41.116527    2519 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:41.116530    2519 out.go:309] Setting ErrFile to fd 2...
	I0610 07:16:41.116532    2519 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:41.116602    2519 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:16:41.116827    2519 mustload.go:65] Loading cluster: multinode-214000
	I0610 07:16:41.116994    2519 config.go:182] Loaded profile config "multinode-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:16:41.120196    2519 out.go:177] 
	W0610 07:16:41.123129    2519 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0610 07:16:41.123133    2519 out.go:239] * 
	* 
	W0610 07:16:41.124686    2519 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:16:41.128018    2519 out.go:177] 

** /stderr **
multinode_test.go:256: I0610 07:16:41.116326    2519 out.go:296] Setting OutFile to fd 1 ...
I0610 07:16:41.116527    2519 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:16:41.116530    2519 out.go:309] Setting ErrFile to fd 2...
I0610 07:16:41.116532    2519 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:16:41.116602    2519 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
I0610 07:16:41.116827    2519 mustload.go:65] Loading cluster: multinode-214000
I0610 07:16:41.116994    2519 config.go:182] Loaded profile config "multinode-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 07:16:41.120196    2519 out.go:177] 
W0610 07:16:41.123129    2519 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0610 07:16:41.123133    2519 out.go:239] * 
* 
W0610 07:16:41.124686    2519 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0610 07:16:41.128018    2519 out.go:177] 
multinode_test.go:257: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-214000 node start m03 --alsologtostderr": exit status 85
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-214000 status
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-214000 status: exit status 7 (28.495834ms)

-- stdout --
	multinode-214000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:263: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-214000 status" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (28.263459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (0.10s)

TestMultiNode/serial/RestartKeepsNodes (5.36s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-214000
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-214000
multinode_test.go:295: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-214000 --wait=true -v=8 --alsologtostderr
E0610 07:16:45.033805    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
E0610 07:16:45.040211    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
E0610 07:16:45.052275    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
E0610 07:16:45.074436    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
E0610 07:16:45.116359    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
E0610 07:16:45.198530    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
E0610 07:16:45.360670    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
E0610 07:16:45.682852    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
E0610 07:16:46.325048    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
multinode_test.go:295: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-214000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.173478375s)

-- stdout --
	* [multinode-214000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-214000 in cluster multinode-214000
	* Restarting existing qemu2 VM for "multinode-214000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-214000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:16:41.302884    2529 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:16:41.303049    2529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:41.303052    2529 out.go:309] Setting ErrFile to fd 2...
	I0610 07:16:41.303055    2529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:41.303128    2529 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:16:41.304031    2529 out.go:303] Setting JSON to false
	I0610 07:16:41.318832    2529 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":971,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:16:41.318891    2529 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:16:41.324188    2529 out.go:177] * [multinode-214000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:16:41.330230    2529 notify.go:220] Checking for updates...
	I0610 07:16:41.334184    2529 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:16:41.337169    2529 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:16:41.340186    2529 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:16:41.343159    2529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:16:41.344480    2529 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:16:41.347164    2529 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:16:41.350729    2529 config.go:182] Loaded profile config "multinode-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:16:41.350795    2529 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:16:41.354985    2529 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 07:16:41.362138    2529 start.go:297] selected driver: qemu2
	I0610 07:16:41.362142    2529 start.go:875] validating driver "qemu2" against &{Name:multinode-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-214000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:16:41.362193    2529 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:16:41.364037    2529 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:16:41.364061    2529 cni.go:84] Creating CNI manager for ""
	I0610 07:16:41.364066    2529 cni.go:136] 1 nodes found, recommending kindnet
	I0610 07:16:41.364073    2529 start_flags.go:319] config:
	{Name:multinode-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-214000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:16:41.364173    2529 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:16:41.367117    2529 out.go:177] * Starting control plane node multinode-214000 in cluster multinode-214000
	I0610 07:16:41.375159    2529 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:16:41.375185    2529 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:16:41.375196    2529 cache.go:57] Caching tarball of preloaded images
	I0610 07:16:41.375256    2529 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:16:41.375261    2529 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:16:41.375330    2529 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/multinode-214000/config.json ...
	I0610 07:16:41.375645    2529 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:16:41.375655    2529 start.go:364] acquiring machines lock for multinode-214000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:16:41.375685    2529 start.go:368] acquired machines lock for "multinode-214000" in 24.125µs
	I0610 07:16:41.375695    2529 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:16:41.375701    2529 fix.go:55] fixHost starting: 
	I0610 07:16:41.375820    2529 fix.go:103] recreateIfNeeded on multinode-214000: state=Stopped err=<nil>
	W0610 07:16:41.375828    2529 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:16:41.384102    2529 out.go:177] * Restarting existing qemu2 VM for "multinode-214000" ...
	I0610 07:16:41.388137    2529 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:aa:de:e4:6e:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2
	I0610 07:16:41.390145    2529 main.go:141] libmachine: STDOUT: 
	I0610 07:16:41.390165    2529 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:16:41.390209    2529 fix.go:57] fixHost completed within 14.501333ms
	I0610 07:16:41.390215    2529 start.go:83] releasing machines lock for "multinode-214000", held for 14.526708ms
	W0610 07:16:41.390229    2529 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:16:41.390269    2529 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:16:41.390273    2529 start.go:702] Will try again in 5 seconds ...
	I0610 07:16:46.392357    2529 start.go:364] acquiring machines lock for multinode-214000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:16:46.392721    2529 start.go:368] acquired machines lock for "multinode-214000" in 280.75µs
	I0610 07:16:46.392851    2529 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:16:46.392872    2529 fix.go:55] fixHost starting: 
	I0610 07:16:46.393609    2529 fix.go:103] recreateIfNeeded on multinode-214000: state=Stopped err=<nil>
	W0610 07:16:46.393640    2529 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:16:46.402084    2529 out.go:177] * Restarting existing qemu2 VM for "multinode-214000" ...
	I0610 07:16:46.406262    2529 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:aa:de:e4:6e:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2
	I0610 07:16:46.415497    2529 main.go:141] libmachine: STDOUT: 
	I0610 07:16:46.415547    2529 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:16:46.415638    2529 fix.go:57] fixHost completed within 22.76875ms
	I0610 07:16:46.415652    2529 start.go:83] releasing machines lock for "multinode-214000", held for 22.908375ms
	W0610 07:16:46.415912    2529 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-214000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-214000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:16:46.423115    2529 out.go:177] 
	W0610 07:16:46.427243    2529 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:16:46.427270    2529 out.go:239] * 
	* 
	W0610 07:16:46.429813    2529 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:16:46.436962    2529 out.go:177] 

** /stderr **
multinode_test.go:297: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-214000" : exit status 80
multinode_test.go:300: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-214000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (31.808709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (5.36s)
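
Both restart attempts die at the same point: nothing is listening on /var/run/socket_vmnet, so the qemu2 driver's socket_vmnet_client is refused its connection. A quick out-of-band probe of that unix socket (a diagnostic sketch, not part of the suite; the path comes from the SocketVMnetPath field in the config above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// Same refusal the driver hits during "Restarting existing qemu2 VM".
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, the host's socket_vmnet daemon needs to be brought back up before any of the qemu2 start/restart tests in this run can pass.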

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-214000 node delete m03
multinode_test.go:394: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-214000 node delete m03: exit status 89 (38.508375ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-214000"

-- /stdout --
multinode_test.go:396: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-214000 node delete m03": exit status 89
multinode_test.go:400: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-214000 status --alsologtostderr
multinode_test.go:400: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-214000 status --alsologtostderr: exit status 7 (28.579958ms)

-- stdout --
	multinode-214000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 07:16:46.615968    2545 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:16:46.616133    2545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:46.616135    2545 out.go:309] Setting ErrFile to fd 2...
	I0610 07:16:46.616138    2545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:46.616206    2545 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:16:46.616316    2545 out.go:303] Setting JSON to false
	I0610 07:16:46.616326    2545 mustload.go:65] Loading cluster: multinode-214000
	I0610 07:16:46.616389    2545 notify.go:220] Checking for updates...
	I0610 07:16:46.616503    2545 config.go:182] Loaded profile config "multinode-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:16:46.616509    2545 status.go:255] checking status of multinode-214000 ...
	I0610 07:16:46.616690    2545 status.go:330] multinode-214000 host status = "Stopped" (err=<nil>)
	I0610 07:16:46.616694    2545 status.go:343] host is not running, skipping remaining checks
	I0610 07:16:46.616696    2545 status.go:257] multinode-214000 status: &{Name:multinode-214000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:402: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-214000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (28.970667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

TestMultiNode/serial/StopMultiNode (0.15s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-214000 stop
multinode_test.go:320: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-214000 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-214000 status: exit status 7 (28.629125ms)

-- stdout --
	multinode-214000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-214000 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-214000 status --alsologtostderr: exit status 7 (28.4735ms)

-- stdout --
	multinode-214000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0610 07:16:46.762512    2553 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:16:46.762671    2553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:46.762674    2553 out.go:309] Setting ErrFile to fd 2...
	I0610 07:16:46.762676    2553 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:46.762761    2553 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:16:46.762868    2553 out.go:303] Setting JSON to false
	I0610 07:16:46.762878    2553 mustload.go:65] Loading cluster: multinode-214000
	I0610 07:16:46.762952    2553 notify.go:220] Checking for updates...
	I0610 07:16:46.763074    2553 config.go:182] Loaded profile config "multinode-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:16:46.763080    2553 status.go:255] checking status of multinode-214000 ...
	I0610 07:16:46.763276    2553 status.go:330] multinode-214000 host status = "Stopped" (err=<nil>)
	I0610 07:16:46.763280    2553 status.go:343] host is not running, skipping remaining checks
	I0610 07:16:46.763282    2553 status.go:257] multinode-214000 status: &{Name:multinode-214000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:333: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-214000 status --alsologtostderr": multinode-214000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:337: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-214000 status --alsologtostderr": multinode-214000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (28.322083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (0.15s)
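
The two assertions above count "host: Stopped" and "kubelet: Stopped" lines in the status output and expect one per node, i.e. two for this two-node profile; with the worker never created, only one block prints. A sketch of that count, assuming plain substring counting stands in for the test's real check in multinode_test.go:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// The single status block printed on stdout above.
	statusOut := `multinode-214000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
`
	fmt.Println("stopped hosts:", strings.Count(statusOut, "host: Stopped"))       // got 1, want 2
	fmt.Println("stopped kubelets:", strings.Count(statusOut, "kubelet: Stopped")) // got 1, want 2
}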

TestMultiNode/serial/RestartMultiNode (5.24s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-214000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
E0610 07:16:47.607622    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
E0610 07:16:50.169986    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
multinode_test.go:354: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-214000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.174671542s)

-- stdout --
	* [multinode-214000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node multinode-214000 in cluster multinode-214000
	* Restarting existing qemu2 VM for "multinode-214000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-214000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:16:46.819192    2557 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:16:46.819286    2557 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:46.819290    2557 out.go:309] Setting ErrFile to fd 2...
	I0610 07:16:46.819293    2557 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:16:46.819367    2557 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:16:46.820337    2557 out.go:303] Setting JSON to false
	I0610 07:16:46.835355    2557 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":976,"bootTime":1686405630,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:16:46.835650    2557 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:16:46.840074    2557 out.go:177] * [multinode-214000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:16:46.847107    2557 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:16:46.847104    2557 notify.go:220] Checking for updates...
	I0610 07:16:46.851037    2557 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:16:46.852171    2557 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:16:46.855035    2557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:16:46.858084    2557 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:16:46.861046    2557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:16:46.864195    2557 config.go:182] Loaded profile config "multinode-214000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:16:46.864438    2557 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:16:46.869023    2557 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 07:16:46.876009    2557 start.go:297] selected driver: qemu2
	I0610 07:16:46.876015    2557 start.go:875] validating driver "qemu2" against &{Name:multinode-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.27.2 ClusterName:multinode-214000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2
62144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:16:46.876071    2557 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:16:46.877979    2557 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:16:46.878008    2557 cni.go:84] Creating CNI manager for ""
	I0610 07:16:46.878013    2557 cni.go:136] 1 nodes found, recommending kindnet
	I0610 07:16:46.878019    2557 start_flags.go:319] config:
	{Name:multinode-214000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:multinode-214000 Namespace:default API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:16:46.878138    2557 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:16:46.890083    2557 out.go:177] * Starting control plane node multinode-214000 in cluster multinode-214000
	I0610 07:16:46.894000    2557 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:16:46.894016    2557 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:16:46.894029    2557 cache.go:57] Caching tarball of preloaded images
	I0610 07:16:46.894128    2557 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:16:46.894138    2557 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:16:46.894199    2557 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/multinode-214000/config.json ...
	I0610 07:16:46.894597    2557 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:16:46.894609    2557 start.go:364] acquiring machines lock for multinode-214000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:16:46.894639    2557 start.go:368] acquired machines lock for "multinode-214000" in 24.375µs
	I0610 07:16:46.894650    2557 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:16:46.894656    2557 fix.go:55] fixHost starting: 
	I0610 07:16:46.894785    2557 fix.go:103] recreateIfNeeded on multinode-214000: state=Stopped err=<nil>
	W0610 07:16:46.894794    2557 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:16:46.903018    2557 out.go:177] * Restarting existing qemu2 VM for "multinode-214000" ...
	I0610 07:16:46.907083    2557 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:aa:de:e4:6e:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2
	I0610 07:16:46.909042    2557 main.go:141] libmachine: STDOUT: 
	I0610 07:16:46.909057    2557 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:16:46.909088    2557 fix.go:57] fixHost completed within 14.432333ms
	I0610 07:16:46.909092    2557 start.go:83] releasing machines lock for "multinode-214000", held for 14.448625ms
	W0610 07:16:46.909098    2557 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:16:46.909133    2557 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:16:46.909137    2557 start.go:702] Will try again in 5 seconds ...
	I0610 07:16:51.911286    2557 start.go:364] acquiring machines lock for multinode-214000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:16:51.911725    2557 start.go:368] acquired machines lock for "multinode-214000" in 367.291µs
	I0610 07:16:51.911846    2557 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:16:51.911868    2557 fix.go:55] fixHost starting: 
	I0610 07:16:51.912612    2557 fix.go:103] recreateIfNeeded on multinode-214000: state=Stopped err=<nil>
	W0610 07:16:51.912637    2557 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:16:51.916949    2557 out.go:177] * Restarting existing qemu2 VM for "multinode-214000" ...
	I0610 07:16:51.925192    2557 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:aa:de:e4:6e:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/multinode-214000/disk.qcow2
	I0610 07:16:51.934534    2557 main.go:141] libmachine: STDOUT: 
	I0610 07:16:51.934580    2557 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:16:51.934712    2557 fix.go:57] fixHost completed within 22.847084ms
	I0610 07:16:51.934726    2557 start.go:83] releasing machines lock for "multinode-214000", held for 22.979666ms
	W0610 07:16:51.934917    2557 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-214000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-214000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:16:51.941998    2557 out.go:177] 
	W0610 07:16:51.944974    2557 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:16:51.944999    2557 out.go:239] * 
	* 
	W0610 07:16:51.947548    2557 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:16:51.954925    2557 out.go:177] 

** /stderr **
multinode_test.go:356: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-214000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (68.58725ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.24s)
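Note: every failure in this group reduces to the same root cause — the qemu2 driver's socket_vmnet_client cannot reach the socket at /var/run/socket_vmnet, so no VM ever boots. A minimal triage sketch, assuming the install layout shown in the log (the daemon binary path and gateway address are assumptions from the upstream socket_vmnet layout; adjust to the local install):

	# Does the socket exist, and is the daemon listening on it?
	ls -l /var/run/socket_vmnet
	# If not, start the daemon by hand before re-running the suite:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon healthy, the same socket_vmnet_client invocation recorded above should launch qemu-system-aarch64 instead of exiting with "Connection refused".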

TestMultiNode/serial/ValidateNameConflict (20.34s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-214000
multinode_test.go:452: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-214000-m01 --driver=qemu2 
E0610 07:16:55.292470    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-214000-m01 --driver=qemu2 : exit status 80 (10.132150834s)

-- stdout --
	* [multinode-214000-m01] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-214000-m01 in cluster multinode-214000-m01
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-214000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-214000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-214000-m02 --driver=qemu2 
E0610 07:17:05.534808    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
multinode_test.go:460: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-214000-m02 --driver=qemu2 : exit status 80 (9.96579675s)

-- stdout --
	* [multinode-214000-m02] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node multinode-214000-m02 in cluster multinode-214000-m02
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-214000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-214000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:462: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-214000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-214000
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-214000: exit status 89 (79.570625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p multinode-214000"

-- /stdout --
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-214000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-214000 -n multinode-214000: exit status 7 (29.574834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-214000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.34s)
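Note: the exit status 89 from "node add" is the expected refusal ("The control plane node must be running for this command") given that no VM ever started, so the name-conflict logic itself was never exercised. A hedged guard one could run by hand before node operations — not the harness's actual logic, just a sketch reusing the status format string the post-mortem above already uses:

	out/minikube-darwin-arm64 status -p multinode-214000 --format={{.Host}} | grep -q Running \
	  && out/minikube-darwin-arm64 node add -p multinode-214000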

TestPreload (10.44s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-258000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-258000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (10.267863166s)

-- stdout --
	* [test-preload-258000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node test-preload-258000 in cluster test-preload-258000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-258000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0610 07:17:12.525741    2608 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:17:12.525869    2608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:17:12.525873    2608 out.go:309] Setting ErrFile to fd 2...
	I0610 07:17:12.525875    2608 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:17:12.525946    2608 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:17:12.526997    2608 out.go:303] Setting JSON to false
	I0610 07:17:12.542086    2608 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1002,"bootTime":1686405630,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:17:12.542151    2608 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:17:12.547473    2608 out.go:177] * [test-preload-258000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:17:12.554373    2608 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:17:12.558443    2608 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:17:12.554439    2608 notify.go:220] Checking for updates...
	I0610 07:17:12.559749    2608 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:17:12.562351    2608 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:17:12.565411    2608 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:17:12.568453    2608 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:17:12.571838    2608 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:17:12.571884    2608 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:17:12.576386    2608 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:17:12.583373    2608 start.go:297] selected driver: qemu2
	I0610 07:17:12.583378    2608 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:17:12.583384    2608 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:17:12.585384    2608 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:17:12.588435    2608 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:17:12.591477    2608 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:17:12.591499    2608 cni.go:84] Creating CNI manager for ""
	I0610 07:17:12.591507    2608 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:17:12.591521    2608 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:17:12.591528    2608 start_flags.go:319] config:
	{Name:test-preload-258000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-258000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:17:12.591623    2608 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:17:12.599372    2608 out.go:177] * Starting control plane node test-preload-258000 in cluster test-preload-258000
	I0610 07:17:12.602396    2608 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0610 07:17:12.602469    2608 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/test-preload-258000/config.json ...
	I0610 07:17:12.602484    2608 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/test-preload-258000/config.json: {Name:mkee82835aef9d931f3d6670962c95d00fd5facf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:17:12.602528    2608 cache.go:107] acquiring lock: {Name:mkd519abf5e7bdc4de317bde871fdd7798a8a059 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:17:12.602528    2608 cache.go:107] acquiring lock: {Name:mkaf236e56782bba9b6c8c54257bd71547baa2ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:17:12.602559    2608 cache.go:107] acquiring lock: {Name:mk008789d1c64247865704387a5e8dd765937b3d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:17:12.602728    2608 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:17:12.602730    2608 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 07:17:12.602741    2608 start.go:364] acquiring machines lock for test-preload-258000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:17:12.602755    2608 cache.go:107] acquiring lock: {Name:mkac6a819d2eeaa6289491896dbf1a46d5ad6067 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:17:12.602750    2608 cache.go:107] acquiring lock: {Name:mk1855618745b36daccfca09f6240fcfb4b34dde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:17:12.602778    2608 start.go:368] acquired machines lock for "test-preload-258000" in 32µs
	I0610 07:17:12.602798    2608 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0610 07:17:12.602791    2608 start.go:93] Provisioning new machine with config: &{Name:test-preload-258000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-258000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:17:12.602816    2608 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0610 07:17:12.602816    2608 cache.go:107] acquiring lock: {Name:mkae5acc741f946caa19e07c583298cc7a88952d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:17:12.602845    2608 cache.go:107] acquiring lock: {Name:mkdb56fe62f41e2c03b487fa30a236bac39c70a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:17:12.602807    2608 cache.go:107] acquiring lock: {Name:mk570cda65d586a63024db0c1ead5ecf346eda46 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:17:12.602820    2608 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:17:12.602877    2608 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0610 07:17:12.612348    2608 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:17:12.602959    2608 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0610 07:17:12.602986    2608 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0610 07:17:12.603103    2608 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0610 07:17:12.608053    2608 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 07:17:12.617294    2608 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0610 07:17:12.617998    2608 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0610 07:17:12.618942    2608 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0610 07:17:12.619056    2608 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0610 07:17:12.621642    2608 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0610 07:17:12.621955    2608 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0610 07:17:12.622131    2608 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0610 07:17:12.623159    2608 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0610 07:17:12.628984    2608 start.go:159] libmachine.API.Create for "test-preload-258000" (driver="qemu2")
	I0610 07:17:12.629005    2608 client.go:168] LocalClient.Create starting
	I0610 07:17:12.629069    2608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:17:12.629091    2608 main.go:141] libmachine: Decoding PEM data...
	I0610 07:17:12.629102    2608 main.go:141] libmachine: Parsing certificate...
	I0610 07:17:12.629151    2608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:17:12.629169    2608 main.go:141] libmachine: Decoding PEM data...
	I0610 07:17:12.629176    2608 main.go:141] libmachine: Parsing certificate...
	I0610 07:17:12.630261    2608 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:17:12.866805    2608 main.go:141] libmachine: Creating SSH key...
	I0610 07:17:12.906202    2608 main.go:141] libmachine: Creating Disk image...
	I0610 07:17:12.906210    2608 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:17:12.906345    2608 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2
	I0610 07:17:12.914866    2608 main.go:141] libmachine: STDOUT: 
	I0610 07:17:12.914888    2608 main.go:141] libmachine: STDERR: 
	I0610 07:17:12.914951    2608 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2 +20000M
	I0610 07:17:12.922426    2608 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:17:12.922437    2608 main.go:141] libmachine: STDERR: 
	I0610 07:17:12.922455    2608 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2
	I0610 07:17:12.922472    2608 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:17:12.922523    2608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:fc:d8:45:78:54 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2
	I0610 07:17:12.924228    2608 main.go:141] libmachine: STDOUT: 
	I0610 07:17:12.924242    2608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:17:12.924263    2608 client.go:171] LocalClient.Create took 295.261584ms
	I0610 07:17:13.925850    2608 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0610 07:17:14.209005    2608 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	W0610 07:17:14.269220    2608 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0610 07:17:14.269245    2608 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0610 07:17:14.386266    2608 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0610 07:17:14.392036    2608 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0610 07:17:14.499204    2608 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0610 07:17:14.499223    2608 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.896519s
	I0610 07:17:14.499232    2608 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0610 07:17:14.588306    2608 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0610 07:17:14.741069    2608 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0610 07:17:14.890828    2608 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0610 07:17:14.890936    2608 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0610 07:17:14.918537    2608 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 07:17:14.918578    2608 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.316112417s
	I0610 07:17:14.918628    2608 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 07:17:14.925331    2608 start.go:128] duration metric: createHost completed in 2.322517125s
	I0610 07:17:14.925369    2608 start.go:83] releasing machines lock for "test-preload-258000", held for 2.32264825s
	W0610 07:17:14.925409    2608 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:17:14.941499    2608 out.go:177] * Deleting "test-preload-258000" in qemu2 ...
	W0610 07:17:14.960308    2608 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:17:14.960341    2608 start.go:702] Will try again in 5 seconds ...
	I0610 07:17:16.085829    2608 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0610 07:17:16.085876    2608 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.4831655s
	I0610 07:17:16.085934    2608 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0610 07:17:16.813047    2608 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0610 07:17:16.813115    2608 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.2105725s
	I0610 07:17:16.813145    2608 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0610 07:17:16.957480    2608 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0610 07:17:16.957520    2608 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.354938166s
	I0610 07:17:16.957544    2608 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0610 07:17:19.735520    2608 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0610 07:17:19.735562    2608 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 7.133262833s
	I0610 07:17:19.735624    2608 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0610 07:17:19.960314    2608 start.go:364] acquiring machines lock for test-preload-258000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:17:19.960721    2608 start.go:368] acquired machines lock for "test-preload-258000" in 334.167µs
	I0610 07:17:19.960814    2608 start.go:93] Provisioning new machine with config: &{Name:test-preload-258000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-258000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:17:19.961034    2608 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:17:19.975494    2608 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:17:20.022304    2608 start.go:159] libmachine.API.Create for "test-preload-258000" (driver="qemu2")
	I0610 07:17:20.022354    2608 client.go:168] LocalClient.Create starting
	I0610 07:17:20.022514    2608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:17:20.022577    2608 main.go:141] libmachine: Decoding PEM data...
	I0610 07:17:20.022606    2608 main.go:141] libmachine: Parsing certificate...
	I0610 07:17:20.022709    2608 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:17:20.022744    2608 main.go:141] libmachine: Decoding PEM data...
	I0610 07:17:20.022760    2608 main.go:141] libmachine: Parsing certificate...
	I0610 07:17:20.023329    2608 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:17:20.073902    2608 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0610 07:17:20.073925    2608 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.471604458s
	I0610 07:17:20.073937    2608 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0610 07:17:20.463408    2608 main.go:141] libmachine: Creating SSH key...
	I0610 07:17:20.711468    2608 main.go:141] libmachine: Creating Disk image...
	I0610 07:17:20.711483    2608 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:17:20.711640    2608 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2
	I0610 07:17:20.720858    2608 main.go:141] libmachine: STDOUT: 
	I0610 07:17:20.720899    2608 main.go:141] libmachine: STDERR: 
	I0610 07:17:20.720965    2608 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2 +20000M
	I0610 07:17:20.728285    2608 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:17:20.728298    2608 main.go:141] libmachine: STDERR: 
	I0610 07:17:20.728320    2608 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2
	I0610 07:17:20.728328    2608 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:17:20.728374    2608 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:ae:d7:65:04:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/test-preload-258000/disk.qcow2
	I0610 07:17:20.729929    2608 main.go:141] libmachine: STDOUT: 
	I0610 07:17:20.729942    2608 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:17:20.729955    2608 client.go:171] LocalClient.Create took 707.61725ms
	I0610 07:17:22.730597    2608 start.go:128] duration metric: createHost completed in 2.769579584s
	I0610 07:17:22.730691    2608 start.go:83] releasing machines lock for "test-preload-258000", held for 2.770031042s
	W0610 07:17:22.731028    2608 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-258000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-258000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:17:22.739504    2608 out.go:177] 
	W0610 07:17:22.742662    2608 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:17:22.742687    2608 out.go:239] * 
	* 
	W0610 07:17:22.745213    2608 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:17:22.752590    2608 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-258000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:522: *** TestPreload FAILED at 2023-06-10 07:17:22.768814 -0700 PDT m=+851.076269334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-258000 -n test-preload-258000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-258000 -n test-preload-258000: exit status 7 (67.223875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-258000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-258000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-258000
--- FAIL: TestPreload (10.44s)
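
Every failure in this section reduces to the same root cause: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is accepting connections on /var/run/socket_vmnet. A minimal Go sketch that reproduces the connectivity check outside of minikube (standard library only; the socket path is the SocketVMnetPath value from the logs above):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Socket path taken from the driver logs above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here matches the driver's failure:
			// the socket file may exist, but no daemon is listening on it.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this dial fails the same way on the build host, restarting the socket_vmnet daemon (or its launchd service) is the first thing to check.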

TestScheduledStopUnix (9.98s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-976000 --memory=2048 --driver=qemu2 
E0610 07:17:26.016875    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-976000 --memory=2048 --driver=qemu2 : exit status 80 (9.81093075s)

-- stdout --
	* [scheduled-stop-976000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-976000 in cluster scheduled-stop-976000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-976000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-976000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-976000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node scheduled-stop-976000 in cluster scheduled-stop-976000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-976000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-976000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestScheduledStopUnix FAILED at 2023-06-10 07:17:32.747311 -0700 PDT m=+861.055070168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-976000 -n scheduled-stop-976000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-976000 -n scheduled-stop-976000: exit status 7 (71.947375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-976000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-976000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-976000
--- FAIL: TestScheduledStopUnix (9.98s)
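
The output above also shows minikube's retry shape: StartHost fails, the half-created profile is deleted, and exactly one more attempt is made after a fixed pause before the command exits with status 80. A rough sketch of that control flow; startHost here is a hypothetical stand-in, not minikube's actual function:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost is a placeholder for the driver's host-creation step.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			// Matches "! StartHost failed, but will try again" and
			// "Will try again in 5 seconds ..." in the logs above.
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}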

TestSkaffold (14.15s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe1541502288 version
skaffold_test.go:63: skaffold version: v2.5.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-053000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-053000 --memory=2600 --driver=qemu2 : exit status 80 (9.649297208s)

-- stdout --
	* [skaffold-053000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-053000 in cluster skaffold-053000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-053000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-053000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-053000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node skaffold-053000 in cluster skaffold-053000
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-053000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-053000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:522: *** TestSkaffold FAILED at 2023-06-10 07:17:46.906709 -0700 PDT m=+875.214909293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-053000 -n skaffold-053000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-053000 -n skaffold-053000: exit status 7 (62.896709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-053000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-053000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-053000
--- FAIL: TestSkaffold (14.15s)
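
Each post-mortem above runs status --format={{.Host}}, which is an ordinary Go text/template rendered against minikube's status data; that is why the -- stdout -- blocks contain the bare word "Stopped" with no labels. A reduced sketch (the Status struct is a stand-in for the real structure, showing only the field the helpers use):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the structure the --format template renders.
	type Status struct {
		Host string
	}

	func main() {
		// The same template string the post-mortem passes: {{.Host}}
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		if err := tmpl.Execute(os.Stdout, Status{Host: "Stopped"}); err != nil {
			panic(err)
		}
	}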

TestRunningBinaryUpgrade (168.32s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
E0610 07:18:42.057675    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:19:28.897304    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
version_upgrade_test.go:106: v1.6.2 release installation failed: bad response code: 404
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-06-10 07:21:16.149004 -0700 PDT m=+1084.463840293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-726000 -n running-upgrade-726000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-726000 -n running-upgrade-726000: exit status 85 (84.486625ms)

-- stdout --
	* Profile "running-upgrade-726000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p running-upgrade-726000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "running-upgrade-726000" host is not running, skipping log retrieval (state="* Profile \"running-upgrade-726000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p running-upgrade-726000\"")
helpers_test.go:175: Cleaning up "running-upgrade-726000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-726000
--- FAIL: TestRunningBinaryUpgrade (168.32s)
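
This test fails earlier than the others: fetching the v1.6.2 release binary returns HTTP 404, plausibly because no darwin/arm64 build was ever published for a release that predates Apple-silicon support. A sketch of the status-code check that yields a "bad response code" error; the URL is an assumed example, not necessarily the exact one the test fetches:

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Assumed release-bucket layout; the test's real URL is not in the log.
		url := "https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64"
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			// A 404 here corresponds to "bad response code: 404" above.
			fmt.Println("bad response code:", resp.StatusCode)
		}
	}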

TestKubernetesUpgrade (15.57s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.064089291s)

-- stdout --
	* [kubernetes-upgrade-067000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubernetes-upgrade-067000 in cluster kubernetes-upgrade-067000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-067000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:21:16.497294    3090 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:21:16.497414    3090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:21:16.497417    3090 out.go:309] Setting ErrFile to fd 2...
	I0610 07:21:16.497420    3090 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:21:16.497487    3090 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:21:16.498464    3090 out.go:303] Setting JSON to false
	I0610 07:21:16.513495    3090 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1246,"bootTime":1686405630,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:21:16.513566    3090 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:21:16.518321    3090 out.go:177] * [kubernetes-upgrade-067000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:21:16.525292    3090 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:21:16.529299    3090 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:21:16.525357    3090 notify.go:220] Checking for updates...
	I0610 07:21:16.535296    3090 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:21:16.538266    3090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:21:16.541371    3090 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:21:16.544279    3090 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:21:16.547619    3090 config.go:182] Loaded profile config "cert-expiration-587000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:21:16.547689    3090 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:21:16.547725    3090 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:21:16.552266    3090 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:21:16.564274    3090 start.go:297] selected driver: qemu2
	I0610 07:21:16.564286    3090 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:21:16.564293    3090 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:21:16.566261    3090 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:21:16.569310    3090 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:21:16.572324    3090 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 07:21:16.572345    3090 cni.go:84] Creating CNI manager for ""
	I0610 07:21:16.572352    3090 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 07:21:16.572358    3090 start_flags.go:319] config:
	{Name:kubernetes-upgrade-067000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-067000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:21:16.572456    3090 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:21:16.580280    3090 out.go:177] * Starting control plane node kubernetes-upgrade-067000 in cluster kubernetes-upgrade-067000
	I0610 07:21:16.584307    3090 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 07:21:16.584330    3090 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 07:21:16.584343    3090 cache.go:57] Caching tarball of preloaded images
	I0610 07:21:16.584448    3090 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:21:16.584457    3090 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0610 07:21:16.584515    3090 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/kubernetes-upgrade-067000/config.json ...
	I0610 07:21:16.584528    3090 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/kubernetes-upgrade-067000/config.json: {Name:mk3331a7a1b68ef83918f3c1c34563f7c94e836c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:21:16.584736    3090 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:21:16.584749    3090 start.go:364] acquiring machines lock for kubernetes-upgrade-067000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:21:16.584781    3090 start.go:368] acquired machines lock for "kubernetes-upgrade-067000" in 26.25µs
	I0610 07:21:16.584794    3090 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-067000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:21:16.584836    3090 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:21:16.593226    3090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:21:16.610597    3090 start.go:159] libmachine.API.Create for "kubernetes-upgrade-067000" (driver="qemu2")
	I0610 07:21:16.610619    3090 client.go:168] LocalClient.Create starting
	I0610 07:21:16.610687    3090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:21:16.610711    3090 main.go:141] libmachine: Decoding PEM data...
	I0610 07:21:16.610724    3090 main.go:141] libmachine: Parsing certificate...
	I0610 07:21:16.610758    3090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:21:16.610775    3090 main.go:141] libmachine: Decoding PEM data...
	I0610 07:21:16.610782    3090 main.go:141] libmachine: Parsing certificate...
	I0610 07:21:16.611111    3090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:21:16.716976    3090 main.go:141] libmachine: Creating SSH key...
	I0610 07:21:16.876732    3090 main.go:141] libmachine: Creating Disk image...
	I0610 07:21:16.876740    3090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:21:16.876895    3090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0610 07:21:16.885528    3090 main.go:141] libmachine: STDOUT: 
	I0610 07:21:16.885543    3090 main.go:141] libmachine: STDERR: 
	I0610 07:21:16.885593    3090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2 +20000M
	I0610 07:21:16.892682    3090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:21:16.892703    3090 main.go:141] libmachine: STDERR: 
	I0610 07:21:16.892725    3090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0610 07:21:16.892730    3090 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:21:16.892765    3090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:1a:2c:08:d2:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0610 07:21:16.894305    3090 main.go:141] libmachine: STDOUT: 
	I0610 07:21:16.894317    3090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:21:16.894337    3090 client.go:171] LocalClient.Create took 283.721708ms
	I0610 07:21:18.896438    3090 start.go:128] duration metric: createHost completed in 2.311658208s
	I0610 07:21:18.896509    3090 start.go:83] releasing machines lock for "kubernetes-upgrade-067000", held for 2.311791125s
	W0610 07:21:18.896601    3090 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:21:18.908811    3090 out.go:177] * Deleting "kubernetes-upgrade-067000" in qemu2 ...
	W0610 07:21:18.928606    3090 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:21:18.928631    3090 start.go:702] Will try again in 5 seconds ...
	I0610 07:21:23.930732    3090 start.go:364] acquiring machines lock for kubernetes-upgrade-067000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:21:23.931209    3090 start.go:368] acquired machines lock for "kubernetes-upgrade-067000" in 349.167µs
	I0610 07:21:23.931347    3090 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-067000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:21:23.931629    3090 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:21:23.941161    3090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:21:23.989161    3090 start.go:159] libmachine.API.Create for "kubernetes-upgrade-067000" (driver="qemu2")
	I0610 07:21:23.989219    3090 client.go:168] LocalClient.Create starting
	I0610 07:21:23.989359    3090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:21:23.989402    3090 main.go:141] libmachine: Decoding PEM data...
	I0610 07:21:23.989429    3090 main.go:141] libmachine: Parsing certificate...
	I0610 07:21:23.989529    3090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:21:23.989560    3090 main.go:141] libmachine: Decoding PEM data...
	I0610 07:21:23.989578    3090 main.go:141] libmachine: Parsing certificate...
	I0610 07:21:23.990165    3090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:21:24.352908    3090 main.go:141] libmachine: Creating SSH key...
	I0610 07:21:24.476775    3090 main.go:141] libmachine: Creating Disk image...
	I0610 07:21:24.476780    3090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:21:24.476935    3090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0610 07:21:24.489962    3090 main.go:141] libmachine: STDOUT: 
	I0610 07:21:24.489983    3090 main.go:141] libmachine: STDERR: 
	I0610 07:21:24.490039    3090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2 +20000M
	I0610 07:21:24.497310    3090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:21:24.497322    3090 main.go:141] libmachine: STDERR: 
	I0610 07:21:24.497345    3090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0610 07:21:24.497353    3090 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:21:24.497403    3090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:80:ed:bb:1b:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0610 07:21:24.498921    3090 main.go:141] libmachine: STDOUT: 
	I0610 07:21:24.498942    3090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:21:24.498955    3090 client.go:171] LocalClient.Create took 509.745916ms
	I0610 07:21:26.499404    3090 start.go:128] duration metric: createHost completed in 2.56778075s
	I0610 07:21:26.502355    3090 start.go:83] releasing machines lock for "kubernetes-upgrade-067000", held for 2.571197375s
	W0610 07:21:26.502692    3090 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-067000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-067000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:21:26.510201    3090 out.go:177] 
	W0610 07:21:26.514298    3090 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:21:26.514373    3090 out.go:239] * 
	* 
	W0610 07:21:26.516807    3090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:21:26.525136    3090 out.go:177] 

** /stderr **
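
Note that disk preparation succeeds before the socket dial fails: the log shows qemu-img convert producing the qcow2 image and qemu-img resize growing it by +20000M. A sketch of those two invocations via os/exec, with illustrative placeholder paths:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// createDisk mirrors the two qemu-img steps in the log above:
	// raw -> qcow2 conversion, then growing the image by `extra`.
	func createDisk(raw, qcow2, extra string) error {
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			return fmt.Errorf("convert: %v: %s", err, out)
		}
		if out, err := exec.Command("qemu-img", "resize", qcow2, extra).CombinedOutput(); err != nil {
			return fmt.Errorf("resize: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		// Placeholder paths; the real ones live under .minikube/machines/<profile>/.
		if err := createDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
			fmt.Println(err)
		}
	}
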
version_upgrade_test.go:236: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-067000
version_upgrade_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-067000 status --format={{.Host}}
version_upgrade_test.go:244: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-067000 status --format={{.Host}}: exit status 7 (34.591ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:246: status error: exit status 7 (may be ok)
version_upgrade_test.go:255: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:255: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.169115125s)

-- stdout --
	* [kubernetes-upgrade-067000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-067000 in cluster kubernetes-upgrade-067000
	* Restarting existing qemu2 VM for "kubernetes-upgrade-067000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-067000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
** stderr ** 
	I0610 07:21:26.700091    3113 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:21:26.700205    3113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:21:26.700208    3113 out.go:309] Setting ErrFile to fd 2...
	I0610 07:21:26.700210    3113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:21:26.700279    3113 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:21:26.701215    3113 out.go:303] Setting JSON to false
	I0610 07:21:26.716394    3113 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1256,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:21:26.716475    3113 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:21:26.721337    3113 out.go:177] * [kubernetes-upgrade-067000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:21:26.724203    3113 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:21:26.728276    3113 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:21:26.724261    3113 notify.go:220] Checking for updates...
	I0610 07:21:26.735277    3113 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:21:26.738326    3113 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:21:26.741287    3113 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:21:26.744282    3113 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:21:26.747484    3113 config.go:182] Loaded profile config "kubernetes-upgrade-067000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0610 07:21:26.747722    3113 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:21:26.752315    3113 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 07:21:26.759240    3113 start.go:297] selected driver: qemu2
	I0610 07:21:26.759245    3113 start.go:875] validating driver "qemu2" against &{Name:kubernetes-upgrade-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-067000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:21:26.759338    3113 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:21:26.761227    3113 cni.go:84] Creating CNI manager for ""
	I0610 07:21:26.761244    3113 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:21:26.761250    3113 start_flags.go:319] config:
	{Name:kubernetes-upgrade-067000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubernetes-upgrade-067000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:21:26.761330    3113 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:21:26.767235    3113 out.go:177] * Starting control plane node kubernetes-upgrade-067000 in cluster kubernetes-upgrade-067000
	I0610 07:21:26.771268    3113 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:21:26.771287    3113 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:21:26.771302    3113 cache.go:57] Caching tarball of preloaded images
	I0610 07:21:26.771358    3113 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:21:26.771363    3113 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:21:26.771419    3113 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/kubernetes-upgrade-067000/config.json ...
	I0610 07:21:26.771781    3113 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:21:26.771791    3113 start.go:364] acquiring machines lock for kubernetes-upgrade-067000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:21:26.771822    3113 start.go:368] acquired machines lock for "kubernetes-upgrade-067000" in 25.458µs
	I0610 07:21:26.771834    3113 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:21:26.771839    3113 fix.go:55] fixHost starting: 
	I0610 07:21:26.771951    3113 fix.go:103] recreateIfNeeded on kubernetes-upgrade-067000: state=Stopped err=<nil>
	W0610 07:21:26.771960    3113 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:21:26.780271    3113 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-067000" ...
	I0610 07:21:26.784076    3113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:80:ed:bb:1b:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0610 07:21:26.785807    3113 main.go:141] libmachine: STDOUT: 
	I0610 07:21:26.785825    3113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:21:26.785854    3113 fix.go:57] fixHost completed within 14.016542ms
	I0610 07:21:26.785860    3113 start.go:83] releasing machines lock for "kubernetes-upgrade-067000", held for 14.0335ms
	W0610 07:21:26.785866    3113 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:21:26.785902    3113 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:21:26.785907    3113 start.go:702] Will try again in 5 seconds ...
	I0610 07:21:31.787970    3113 start.go:364] acquiring machines lock for kubernetes-upgrade-067000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:21:31.788422    3113 start.go:368] acquired machines lock for "kubernetes-upgrade-067000" in 353.208µs
	I0610 07:21:31.788601    3113 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:21:31.788622    3113 fix.go:55] fixHost starting: 
	I0610 07:21:31.789384    3113 fix.go:103] recreateIfNeeded on kubernetes-upgrade-067000: state=Stopped err=<nil>
	W0610 07:21:31.789409    3113 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:21:31.797828    3113 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-067000" ...
	I0610 07:21:31.800971    3113 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:80:ed:bb:1b:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubernetes-upgrade-067000/disk.qcow2
	I0610 07:21:31.810167    3113 main.go:141] libmachine: STDOUT: 
	I0610 07:21:31.810228    3113 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:21:31.810350    3113 fix.go:57] fixHost completed within 21.730125ms
	I0610 07:21:31.810374    3113 start.go:83] releasing machines lock for "kubernetes-upgrade-067000", held for 21.924167ms
	W0610 07:21:31.810614    3113 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-067000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-067000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:21:31.818817    3113 out.go:177] 
	W0610 07:21:31.821867    3113 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:21:31.821893    3113 out.go:239] * 
	* 
	W0610 07:21:31.824371    3113 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:21:31.832782    3113 out.go:177] 

** /stderr **
version_upgrade_test.go:257: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-067000 --memory=2200 --kubernetes-version=v1.27.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:260: (dbg) Run:  kubectl --context kubernetes-upgrade-067000 version --output=json
version_upgrade_test.go:260: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-067000 version --output=json: exit status 1 (64.959625ms)

** stderr ** 
	error: context "kubernetes-upgrade-067000" does not exist

** /stderr **
version_upgrade_test.go:262: error running kubectl: exit status 1
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-06-10 07:21:31.909678 -0700 PDT m=+1100.225014251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-067000 -n kubernetes-upgrade-067000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-067000 -n kubernetes-upgrade-067000: exit status 7 (32.686042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-067000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-067000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-067000
--- FAIL: TestKubernetesUpgrade (15.57s)
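
Note: every qemu2 start failure in this report reduces to the same root cause: nothing is listening on the socket_vmnet daemon socket at /var/run/socket_vmnet. A minimal Go sketch for checking that socket from the test host (illustrative only; the path is taken from the logs above, and this helper is not part of the test suite):

	// socketprobe.go - minimal sketch: dials the socket_vmnet unix socket
	// referenced in the failures above to confirm whether the daemon is up.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Matches the "Connection refused" seen throughout this report.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

With the daemon down, every driver start fails before the VM even boots, which is why these failures all cluster in the 5-15 second range.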

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.43s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=15074
- KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3416282824/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.43s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.02s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.30.1 on darwin (arm64)
- MINIKUBE_LOCATION=15074
- KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current747711430/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.02s)
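
Note: both hyperkit upgrade subtests fail the same way: the hyperkit driver exists only for darwin/amd64, so on this arm64 agent minikube exits with DRV_UNSUPPORTED_OS (exit status 56) before any upgrade logic runs. A minimal sketch of the platform guard such a test would need (hypothetical test name; not the suite's actual skip logic):

	package driver_test

	import (
		"runtime"
		"testing"
	)

	// TestHyperkitGuard is a hypothetical example of skipping hyperkit-only
	// tests on unsupported platforms such as this darwin/arm64 agent.
	func TestHyperkitGuard(t *testing.T) {
		if runtime.GOOS != "darwin" || runtime.GOARCH != "amd64" {
			t.Skipf("hyperkit driver is unsupported on %s/%s", runtime.GOOS, runtime.GOARCH)
		}
		// hyperkit-specific upgrade assertions would run here
	}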

TestStoppedBinaryUpgrade/Setup (138.95s)

=== RUN   TestStoppedBinaryUpgrade/Setup
version_upgrade_test.go:167: v1.6.2 release installation failed: bad response code: 404
--- FAIL: TestStoppedBinaryUpgrade/Setup (138.95s)
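
Note: the setup step downloads an old minikube release binary and got an HTTP 404, most likely because v1.6.2 predates darwin/arm64 release builds. A minimal sketch of the status check behind "bad response code: 404" (the URL is illustrative; the real test constructs it from the release version):

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		// Illustrative URL: v1.6.2 shipped no darwin-arm64 binary, so this 404s.
		url := "https://storage.googleapis.com/minikube/releases/v1.6.2/minikube-darwin-arm64"
		resp, err := http.Head(url)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			fmt.Println("bad response code:", resp.StatusCode)
		}
	}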

TestPause/serial/Start (10.22s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-191000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-191000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.152423417s)

-- stdout --
	* [pause-191000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node pause-191000 in cluster pause-191000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-191000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-191000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-191000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-191000 -n pause-191000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-191000 -n pause-191000: exit status 7 (69.924083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-191000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.22s)
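
Note: the stdout above shows minikube's standard two-attempt start: create the host, delete the half-created machine on failure, retry once after a delay, then give up with GUEST_PROVISION. A simplified sketch of that control flow (not the actual start.go implementation):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the driver start; in this report it always
	// fails with the socket_vmnet connection error.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" in the logs
			if err = startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}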

TestNoKubernetes/serial/StartWithK8s (9.87s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-817000 --driver=qemu2 
E0610 07:21:45.024518    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-817000 --driver=qemu2 : exit status 80 (9.804921292s)

-- stdout --
	* [NoKubernetes-817000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node NoKubernetes-817000 in cluster NoKubernetes-817000
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-817000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-817000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-817000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-817000 -n NoKubernetes-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-817000 -n NoKubernetes-817000: exit status 7 (68.555666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.87s)

TestNoKubernetes/serial/StartWithStopK8s (5.47s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-817000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-817000 --no-kubernetes --driver=qemu2 : exit status 80 (5.396500125s)

-- stdout --
	* [NoKubernetes-817000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-817000
	* Restarting existing qemu2 VM for "NoKubernetes-817000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-817000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-817000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-817000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-817000 -n NoKubernetes-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-817000 -n NoKubernetes-817000: exit status 7 (70.7435ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.47s)
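
Note the error-prefix difference against StartWithK8s above: a fresh profile takes the create path ("creating host: create: creating:"), while this run reuses the NoKubernetes-817000 machine and takes the restart path ("driver start:"). A reduced sketch of that branch (hypothetical helper; not minikube's actual fix.go/start.go code):

	package main

	import "fmt"

	var errSocket = fmt.Errorf(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

	// startHost mirrors the two paths visible in these logs: restart an
	// existing machine, or create a new one. Hypothetical simplification.
	func startHost(machineExists bool) error {
		if machineExists {
			return fmt.Errorf("driver start: %w", errSocket)
		}
		return fmt.Errorf("creating host: create: creating: %w", errSocket)
	}

	func main() {
		fmt.Println(startHost(true))  // matches this test's error text
		fmt.Println(startHost(false)) // matches StartWithK8s above
	}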

TestNoKubernetes/serial/Start (5.46s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-817000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-817000 --no-kubernetes --driver=qemu2 : exit status 80 (5.393565167s)

-- stdout --
	* [NoKubernetes-817000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-817000
	* Restarting existing qemu2 VM for "NoKubernetes-817000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-817000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-817000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-817000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-817000 -n NoKubernetes-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-817000 -n NoKubernetes-817000: exit status 7 (67.370083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.46s)

TestNoKubernetes/serial/StartNoArgs (5.46s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-817000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-817000 --driver=qemu2 : exit status 80 (5.392729792s)

-- stdout --
	* [NoKubernetes-817000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-817000
	* Restarting existing qemu2 VM for "NoKubernetes-817000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-817000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-817000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-817000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-817000 -n NoKubernetes-817000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-817000 -n NoKubernetes-817000: exit status 7 (70.143459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-817000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.46s)

TestNetworkPlugins/group/auto/Start (9.82s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0610 07:22:12.734802    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/ingress-addon-legacy-433000/client.crt: no such file or directory
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.81766825s)

-- stdout --
	* [auto-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node auto-176000 in cluster auto-176000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-176000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:22:09.133382    3223 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:22:09.133513    3223 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:22:09.133517    3223 out.go:309] Setting ErrFile to fd 2...
	I0610 07:22:09.133519    3223 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:22:09.133584    3223 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:22:09.134655    3223 out.go:303] Setting JSON to false
	I0610 07:22:09.149703    3223 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1299,"bootTime":1686405630,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:22:09.149778    3223 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:22:09.153853    3223 out.go:177] * [auto-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:22:09.160742    3223 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:22:09.160807    3223 notify.go:220] Checking for updates...
	I0610 07:22:09.164576    3223 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:22:09.167773    3223 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:22:09.170734    3223 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:22:09.173760    3223 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:22:09.176679    3223 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:22:09.180046    3223 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:22:09.180101    3223 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:22:09.184727    3223 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:22:09.191713    3223 start.go:297] selected driver: qemu2
	I0610 07:22:09.191717    3223 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:22:09.191723    3223 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:22:09.193641    3223 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:22:09.196704    3223 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:22:09.199717    3223 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:22:09.199737    3223 cni.go:84] Creating CNI manager for ""
	I0610 07:22:09.199747    3223 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:22:09.199751    3223 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:22:09.199758    3223 start_flags.go:319] config:
	{Name:auto-176000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:auto-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:22:09.199876    3223 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:22:09.207592    3223 out.go:177] * Starting control plane node auto-176000 in cluster auto-176000
	I0610 07:22:09.211665    3223 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:22:09.211687    3223 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:22:09.211699    3223 cache.go:57] Caching tarball of preloaded images
	I0610 07:22:09.211781    3223 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:22:09.211792    3223 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:22:09.211847    3223 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/auto-176000/config.json ...
	I0610 07:22:09.211859    3223 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/auto-176000/config.json: {Name:mkcefd9771b9bcddf1586e6e3badf479f9a4aa95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:22:09.212049    3223 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:22:09.212059    3223 start.go:364] acquiring machines lock for auto-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:22:09.212089    3223 start.go:368] acquired machines lock for "auto-176000" in 24.334µs
	I0610 07:22:09.212100    3223 start.go:93] Provisioning new machine with config: &{Name:auto-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:auto-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:22:09.212128    3223 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:22:09.220700    3223 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:22:09.237375    3223 start.go:159] libmachine.API.Create for "auto-176000" (driver="qemu2")
	I0610 07:22:09.237399    3223 client.go:168] LocalClient.Create starting
	I0610 07:22:09.237455    3223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:22:09.237475    3223 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:09.237485    3223 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:09.237523    3223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:22:09.237538    3223 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:09.237547    3223 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:09.237841    3223 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:22:09.404210    3223 main.go:141] libmachine: Creating SSH key...
	I0610 07:22:09.508161    3223 main.go:141] libmachine: Creating Disk image...
	I0610 07:22:09.508167    3223 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:22:09.508313    3223 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2
	I0610 07:22:09.516949    3223 main.go:141] libmachine: STDOUT: 
	I0610 07:22:09.516962    3223 main.go:141] libmachine: STDERR: 
	I0610 07:22:09.517015    3223 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2 +20000M
	I0610 07:22:09.524120    3223 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:22:09.524156    3223 main.go:141] libmachine: STDERR: 
	I0610 07:22:09.524173    3223 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2
	I0610 07:22:09.524187    3223 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:22:09.524229    3223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:e5:ae:2e:fb:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2
	I0610 07:22:09.525818    3223 main.go:141] libmachine: STDOUT: 
	I0610 07:22:09.525836    3223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:22:09.525863    3223 client.go:171] LocalClient.Create took 288.467208ms
	I0610 07:22:11.527992    3223 start.go:128] duration metric: createHost completed in 2.315921042s
	I0610 07:22:11.528046    3223 start.go:83] releasing machines lock for "auto-176000", held for 2.316020833s
	W0610 07:22:11.528129    3223 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:11.532700    3223 out.go:177] * Deleting "auto-176000" in qemu2 ...
	W0610 07:22:11.553462    3223 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:11.553490    3223 start.go:702] Will try again in 5 seconds ...
	I0610 07:22:16.555544    3223 start.go:364] acquiring machines lock for auto-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:22:16.556081    3223 start.go:368] acquired machines lock for "auto-176000" in 410µs
	I0610 07:22:16.556223    3223 start.go:93] Provisioning new machine with config: &{Name:auto-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:auto-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:22:16.556548    3223 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:22:16.566277    3223 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:22:16.614114    3223 start.go:159] libmachine.API.Create for "auto-176000" (driver="qemu2")
	I0610 07:22:16.614147    3223 client.go:168] LocalClient.Create starting
	I0610 07:22:16.614262    3223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:22:16.614303    3223 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:16.614321    3223 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:16.614406    3223 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:22:16.614433    3223 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:16.614453    3223 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:16.614952    3223 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:22:16.742170    3223 main.go:141] libmachine: Creating SSH key...
	I0610 07:22:16.865747    3223 main.go:141] libmachine: Creating Disk image...
	I0610 07:22:16.865752    3223 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:22:16.865906    3223 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2
	I0610 07:22:16.874615    3223 main.go:141] libmachine: STDOUT: 
	I0610 07:22:16.874639    3223 main.go:141] libmachine: STDERR: 
	I0610 07:22:16.874703    3223 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2 +20000M
	I0610 07:22:16.881854    3223 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:22:16.881866    3223 main.go:141] libmachine: STDERR: 
	I0610 07:22:16.881879    3223 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2
	I0610 07:22:16.881884    3223 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:22:16.881934    3223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:91:7d:2d:32:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/auto-176000/disk.qcow2
	I0610 07:22:16.883461    3223 main.go:141] libmachine: STDOUT: 
	I0610 07:22:16.883475    3223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:22:16.883484    3223 client.go:171] LocalClient.Create took 269.341042ms
	I0610 07:22:18.885663    3223 start.go:128] duration metric: createHost completed in 2.329060417s
	I0610 07:22:18.885747    3223 start.go:83] releasing machines lock for "auto-176000", held for 2.329699625s
	W0610 07:22:18.886125    3223 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:18.893679    3223 out.go:177] 
	W0610 07:22:18.897846    3223 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:22:18.897869    3223 out.go:239] * 
	* 
	W0610 07:22:18.900841    3223 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:22:18.909543    3223 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.82s)
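
Note: in the verbose log above, the qemu-img convert and resize steps succeed; only the socket_vmnet-wrapped qemu-system-aarch64 launch fails. For reference, the two disk-image steps as a standalone sketch (paths shortened to placeholders; error handling reduced to a print):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output, mirroring the
	// STDOUT/STDERR lines that libmachine logs for each qemu-img call.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("%s: %s (err=%v)\n", name, out, err)
	}

	func main() {
		// Same two steps as in the log, with placeholder paths.
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", "disk.qcow2.raw", "disk.qcow2")
		run("qemu-img", "resize", "disk.qcow2", "+20000M")
	}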

TestNetworkPlugins/group/kindnet/Start (9.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.822678208s)

-- stdout --
	* [kindnet-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kindnet-176000 in cluster kindnet-176000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-176000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:22:21.023091    3332 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:22:21.023223    3332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:22:21.023226    3332 out.go:309] Setting ErrFile to fd 2...
	I0610 07:22:21.023229    3332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:22:21.023296    3332 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:22:21.024297    3332 out.go:303] Setting JSON to false
	I0610 07:22:21.039245    3332 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1311,"bootTime":1686405630,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:22:21.039302    3332 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:22:21.044005    3332 out.go:177] * [kindnet-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:22:21.051828    3332 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:22:21.051895    3332 notify.go:220] Checking for updates...
	I0610 07:22:21.058921    3332 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:22:21.060295    3332 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:22:21.062968    3332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:22:21.065956    3332 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:22:21.068965    3332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:22:21.072256    3332 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:22:21.072303    3332 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:22:21.076961    3332 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:22:21.084009    3332 start.go:297] selected driver: qemu2
	I0610 07:22:21.084015    3332 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:22:21.084024    3332 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:22:21.087027    3332 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:22:21.089976    3332 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:22:21.093059    3332 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:22:21.093086    3332 cni.go:84] Creating CNI manager for "kindnet"
	I0610 07:22:21.093090    3332 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0610 07:22:21.093102    3332 start_flags.go:319] config:
	{Name:kindnet-176000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:22:21.093192    3332 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:22:21.100961    3332 out.go:177] * Starting control plane node kindnet-176000 in cluster kindnet-176000
	I0610 07:22:21.103905    3332 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:22:21.103929    3332 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:22:21.103946    3332 cache.go:57] Caching tarball of preloaded images
	I0610 07:22:21.104010    3332 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:22:21.104022    3332 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:22:21.104082    3332 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/kindnet-176000/config.json ...
	I0610 07:22:21.104094    3332 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/kindnet-176000/config.json: {Name:mk540f7a810fc6e2fd166ee6fe7ea3f1de6860b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:22:21.104288    3332 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:22:21.104299    3332 start.go:364] acquiring machines lock for kindnet-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:22:21.104328    3332 start.go:368] acquired machines lock for "kindnet-176000" in 24.458µs
	I0610 07:22:21.104340    3332 start.go:93] Provisioning new machine with config: &{Name:kindnet-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:22:21.104367    3332 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:22:21.111896    3332 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:22:21.128608    3332 start.go:159] libmachine.API.Create for "kindnet-176000" (driver="qemu2")
	I0610 07:22:21.128624    3332 client.go:168] LocalClient.Create starting
	I0610 07:22:21.128690    3332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:22:21.128710    3332 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:21.128727    3332 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:21.128780    3332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:22:21.128796    3332 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:21.128807    3332 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:21.129145    3332 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:22:21.242576    3332 main.go:141] libmachine: Creating SSH key...
	I0610 07:22:21.415013    3332 main.go:141] libmachine: Creating Disk image...
	I0610 07:22:21.415020    3332 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:22:21.415181    3332 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2
	I0610 07:22:21.424196    3332 main.go:141] libmachine: STDOUT: 
	I0610 07:22:21.424225    3332 main.go:141] libmachine: STDERR: 
	I0610 07:22:21.424299    3332 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2 +20000M
	I0610 07:22:21.431397    3332 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:22:21.431409    3332 main.go:141] libmachine: STDERR: 
	I0610 07:22:21.431432    3332 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2
	I0610 07:22:21.431443    3332 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:22:21.431476    3332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:84:4b:0a:ca:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2
	I0610 07:22:21.433044    3332 main.go:141] libmachine: STDOUT: 
	I0610 07:22:21.433055    3332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:22:21.433073    3332 client.go:171] LocalClient.Create took 304.455709ms
	I0610 07:22:23.435223    3332 start.go:128] duration metric: createHost completed in 2.330912625s
	I0610 07:22:23.435274    3332 start.go:83] releasing machines lock for "kindnet-176000", held for 2.33100725s
	W0610 07:22:23.435332    3332 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:23.446514    3332 out.go:177] * Deleting "kindnet-176000" in qemu2 ...
	W0610 07:22:23.467307    3332 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:23.467334    3332 start.go:702] Will try again in 5 seconds ...
	I0610 07:22:28.469559    3332 start.go:364] acquiring machines lock for kindnet-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:22:28.470089    3332 start.go:368] acquired machines lock for "kindnet-176000" in 426.417µs
	I0610 07:22:28.470206    3332 start.go:93] Provisioning new machine with config: &{Name:kindnet-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kindnet-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:22:28.470581    3332 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:22:28.481117    3332 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:22:28.528657    3332 start.go:159] libmachine.API.Create for "kindnet-176000" (driver="qemu2")
	I0610 07:22:28.528711    3332 client.go:168] LocalClient.Create starting
	I0610 07:22:28.528830    3332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:22:28.528880    3332 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:28.528895    3332 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:28.528962    3332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:22:28.528988    3332 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:28.529003    3332 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:28.529527    3332 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:22:28.654433    3332 main.go:141] libmachine: Creating SSH key...
	I0610 07:22:28.759471    3332 main.go:141] libmachine: Creating Disk image...
	I0610 07:22:28.759479    3332 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:22:28.759636    3332 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2
	I0610 07:22:28.768284    3332 main.go:141] libmachine: STDOUT: 
	I0610 07:22:28.768307    3332 main.go:141] libmachine: STDERR: 
	I0610 07:22:28.768377    3332 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2 +20000M
	I0610 07:22:28.775499    3332 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:22:28.775517    3332 main.go:141] libmachine: STDERR: 
	I0610 07:22:28.775531    3332 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2
	I0610 07:22:28.775548    3332 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:22:28.775599    3332 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:8f:78:14:3e:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kindnet-176000/disk.qcow2
	I0610 07:22:28.777093    3332 main.go:141] libmachine: STDOUT: 
	I0610 07:22:28.777111    3332 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:22:28.777125    3332 client.go:171] LocalClient.Create took 248.418ms
	I0610 07:22:30.779232    3332 start.go:128] duration metric: createHost completed in 2.308698584s
	I0610 07:22:30.779342    3332 start.go:83] releasing machines lock for "kindnet-176000", held for 2.309252042s
	W0610 07:22:30.779816    3332 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:30.789240    3332 out.go:177] 
	W0610 07:22:30.793320    3332 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:22:30.793344    3332 out.go:239] * 
	* 
	W0610 07:22:30.795704    3332 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:22:30.805339    3332 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.82s)
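Note that both disk-preparation steps above (qemu-img convert and qemu-img resize) succeed; the run only fails at the final step, when libmachine starts QEMU through socket_vmnet_client and QEMU is told to use an inherited descriptor for its NIC (-netdev socket,id=net0,fd=3). The descriptor-passing pattern involved is roughly the following Go sketch (illustrative only: the real socket_vmnet_client is a separate helper binary, and the QEMU flags here are trimmed down from the full command line in the log):

    package main

    import (
        "log"
        "net"
        "os"
        "os/exec"
    )

    func main() {
        // This dial is the step that fails throughout this report.
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            log.Fatalf("dial socket_vmnet: %v", err)
        }
        f, err := conn.(*net.UnixConn).File() // duplicate the fd for the child
        if err != nil {
            log.Fatal(err)
        }
        cmd := exec.Command("qemu-system-aarch64",
            "-netdev", "socket,id=net0,fd=3", // ExtraFiles[0] becomes fd 3
            "-device", "virtio-net-pci,netdev=net0")
        cmd.ExtraFiles = []*os.File{f} // child descriptors start at 3
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }

Because the dial fails before QEMU is ever exec'd, the VM never starts and no guest-side logs exist for these tests.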

TestNetworkPlugins/group/flannel/Start (9.94s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.93845625s)

-- stdout --
	* [flannel-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node flannel-176000 in cluster flannel-176000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-176000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:22:33.033995    3445 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:22:33.034127    3445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:22:33.034130    3445 out.go:309] Setting ErrFile to fd 2...
	I0610 07:22:33.034133    3445 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:22:33.034203    3445 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:22:33.035195    3445 out.go:303] Setting JSON to false
	I0610 07:22:33.050109    3445 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1323,"bootTime":1686405630,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:22:33.050183    3445 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:22:33.058115    3445 out.go:177] * [flannel-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:22:33.061170    3445 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:22:33.065018    3445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:22:33.061223    3445 notify.go:220] Checking for updates...
	I0610 07:22:33.068077    3445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:22:33.071157    3445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:22:33.074089    3445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:22:33.077126    3445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:22:33.080473    3445 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:22:33.080721    3445 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:22:33.084110    3445 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:22:33.092142    3445 start.go:297] selected driver: qemu2
	I0610 07:22:33.092146    3445 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:22:33.092155    3445 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:22:33.094211    3445 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:22:33.097055    3445 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:22:33.100185    3445 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:22:33.100214    3445 cni.go:84] Creating CNI manager for "flannel"
	I0610 07:22:33.100218    3445 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0610 07:22:33.100228    3445 start_flags.go:319] config:
	{Name:flannel-176000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:22:33.100329    3445 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:22:33.104110    3445 out.go:177] * Starting control plane node flannel-176000 in cluster flannel-176000
	I0610 07:22:33.112111    3445 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:22:33.112139    3445 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:22:33.112156    3445 cache.go:57] Caching tarball of preloaded images
	I0610 07:22:33.112220    3445 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:22:33.112226    3445 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:22:33.112286    3445 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/flannel-176000/config.json ...
	I0610 07:22:33.112298    3445 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/flannel-176000/config.json: {Name:mk60ec39b0771cd1d46ad7387a66450f97b478fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:22:33.112478    3445 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:22:33.112488    3445 start.go:364] acquiring machines lock for flannel-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:22:33.112516    3445 start.go:368] acquired machines lock for "flannel-176000" in 22.375µs
	I0610 07:22:33.112527    3445 start.go:93] Provisioning new machine with config: &{Name:flannel-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:22:33.112560    3445 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:22:33.121047    3445 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:22:33.137169    3445 start.go:159] libmachine.API.Create for "flannel-176000" (driver="qemu2")
	I0610 07:22:33.137201    3445 client.go:168] LocalClient.Create starting
	I0610 07:22:33.137269    3445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:22:33.137290    3445 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:33.137303    3445 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:33.137349    3445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:22:33.137368    3445 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:33.137375    3445 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:33.137694    3445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:22:33.260379    3445 main.go:141] libmachine: Creating SSH key...
	I0610 07:22:33.583204    3445 main.go:141] libmachine: Creating Disk image...
	I0610 07:22:33.583213    3445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:22:33.583376    3445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2
	I0610 07:22:33.592138    3445 main.go:141] libmachine: STDOUT: 
	I0610 07:22:33.592156    3445 main.go:141] libmachine: STDERR: 
	I0610 07:22:33.592211    3445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2 +20000M
	I0610 07:22:33.599393    3445 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:22:33.599407    3445 main.go:141] libmachine: STDERR: 
	I0610 07:22:33.599424    3445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2
	I0610 07:22:33.599429    3445 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:22:33.599464    3445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:0d:df:13:ee:91 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2
	I0610 07:22:33.600979    3445 main.go:141] libmachine: STDOUT: 
	I0610 07:22:33.600993    3445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:22:33.601011    3445 client.go:171] LocalClient.Create took 463.818375ms
	I0610 07:22:35.603189    3445 start.go:128] duration metric: createHost completed in 2.490672958s
	I0610 07:22:35.603253    3445 start.go:83] releasing machines lock for "flannel-176000", held for 2.490807875s
	W0610 07:22:35.603322    3445 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:35.613620    3445 out.go:177] * Deleting "flannel-176000" in qemu2 ...
	W0610 07:22:35.632029    3445 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:35.632059    3445 start.go:702] Will try again in 5 seconds ...
	I0610 07:22:40.634190    3445 start.go:364] acquiring machines lock for flannel-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:22:40.634894    3445 start.go:368] acquired machines lock for "flannel-176000" in 587.25µs
	I0610 07:22:40.635013    3445 start.go:93] Provisioning new machine with config: &{Name:flannel-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:flannel-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:22:40.635323    3445 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:22:40.640086    3445 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:22:40.687278    3445 start.go:159] libmachine.API.Create for "flannel-176000" (driver="qemu2")
	I0610 07:22:40.687327    3445 client.go:168] LocalClient.Create starting
	I0610 07:22:40.687484    3445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:22:40.687530    3445 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:40.687557    3445 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:40.687641    3445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:22:40.687670    3445 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:40.687686    3445 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:40.688239    3445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:22:40.815941    3445 main.go:141] libmachine: Creating SSH key...
	I0610 07:22:40.888944    3445 main.go:141] libmachine: Creating Disk image...
	I0610 07:22:40.888949    3445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:22:40.889100    3445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2
	I0610 07:22:40.897806    3445 main.go:141] libmachine: STDOUT: 
	I0610 07:22:40.897824    3445 main.go:141] libmachine: STDERR: 
	I0610 07:22:40.897883    3445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2 +20000M
	I0610 07:22:40.905057    3445 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:22:40.905069    3445 main.go:141] libmachine: STDERR: 
	I0610 07:22:40.905089    3445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2
	I0610 07:22:40.905094    3445 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:22:40.905128    3445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:45:75:d3:d5:47 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/flannel-176000/disk.qcow2
	I0610 07:22:40.906603    3445 main.go:141] libmachine: STDOUT: 
	I0610 07:22:40.906618    3445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:22:40.906630    3445 client.go:171] LocalClient.Create took 219.305291ms
	I0610 07:22:42.908742    3445 start.go:128] duration metric: createHost completed in 2.273467666s
	I0610 07:22:42.908789    3445 start.go:83] releasing machines lock for "flannel-176000", held for 2.27394225s
	W0610 07:22:42.909195    3445 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:42.917298    3445 out.go:177] 
	W0610 07:22:42.921789    3445 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:22:42.921814    3445 out.go:239] * 
	* 
	W0610 07:22:42.924249    3445 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:22:42.931780    3445 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.94s)
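Every failed start in this report follows the same recover-and-retry shape: create the host, hit the socket error, delete the profile, wait five seconds, try exactly once more, then exit with status 80 (GUEST_PROVISION). A stripped-down sketch of that flow, with stand-in names rather than minikube's actual API:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the provisioning step that fails in this
    // report whenever QEMU cannot reach /var/run/socket_vmnet.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func startWithRetry() error {
        err := createHost()
        if err == nil {
            return nil
        }
        fmt.Printf("! StartHost failed, but will try again: %v\n", err)
        time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
        return createHost() // second and final attempt before exit status 80
    }

    func main() {
        if err := startWithRetry(); err != nil {
            fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
        }
    }

Because the daemon never comes up, the retry buys nothing here, which is why each of these tests burns almost exactly ten seconds (two ~2.3 s create attempts plus the 5 s wait) before failing.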

TestNetworkPlugins/group/enable-default-cni/Start (9.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.735009333s)

-- stdout --
	* [enable-default-cni-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node enable-default-cni-176000 in cluster enable-default-cni-176000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-176000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:22:45.273501    3562 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:22:45.273665    3562 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:22:45.273669    3562 out.go:309] Setting ErrFile to fd 2...
	I0610 07:22:45.273671    3562 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:22:45.273738    3562 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:22:45.274754    3562 out.go:303] Setting JSON to false
	I0610 07:22:45.289848    3562 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1335,"bootTime":1686405630,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:22:45.289926    3562 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:22:45.296709    3562 out.go:177] * [enable-default-cni-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:22:45.300657    3562 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:22:45.304674    3562 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:22:45.300703    3562 notify.go:220] Checking for updates...
	I0610 07:22:45.310666    3562 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:22:45.313719    3562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:22:45.316721    3562 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:22:45.319646    3562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:22:45.323005    3562 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:22:45.323044    3562 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:22:45.327695    3562 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:22:45.334681    3562 start.go:297] selected driver: qemu2
	I0610 07:22:45.334685    3562 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:22:45.334691    3562 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:22:45.336635    3562 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:22:45.339690    3562 out.go:177] * Automatically selected the socket_vmnet network
	E0610 07:22:45.342798    3562 start_flags.go:453] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0610 07:22:45.342811    3562 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:22:45.342831    3562 cni.go:84] Creating CNI manager for "bridge"
	I0610 07:22:45.342837    3562 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:22:45.342850    3562 start_flags.go:319] config:
	{Name:enable-default-cni-176000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:22:45.342934    3562 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:22:45.350660    3562 out.go:177] * Starting control plane node enable-default-cni-176000 in cluster enable-default-cni-176000
	I0610 07:22:45.354663    3562 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:22:45.354689    3562 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:22:45.354700    3562 cache.go:57] Caching tarball of preloaded images
	I0610 07:22:45.354757    3562 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:22:45.354762    3562 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:22:45.354830    3562 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/enable-default-cni-176000/config.json ...
	I0610 07:22:45.354842    3562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/enable-default-cni-176000/config.json: {Name:mk12b50370af5c22c257ebde141b218cd946cd22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:22:45.355037    3562 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:22:45.355051    3562 start.go:364] acquiring machines lock for enable-default-cni-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:22:45.355080    3562 start.go:368] acquired machines lock for "enable-default-cni-176000" in 24.417µs
	I0610 07:22:45.355092    3562 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:22:45.355115    3562 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:22:45.360670    3562 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:22:45.377113    3562 start.go:159] libmachine.API.Create for "enable-default-cni-176000" (driver="qemu2")
	I0610 07:22:45.377139    3562 client.go:168] LocalClient.Create starting
	I0610 07:22:45.377193    3562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:22:45.377213    3562 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:45.377223    3562 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:45.377273    3562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:22:45.377290    3562 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:45.377297    3562 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:45.377627    3562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:22:45.487596    3562 main.go:141] libmachine: Creating SSH key...
	I0610 07:22:45.592035    3562 main.go:141] libmachine: Creating Disk image...
	I0610 07:22:45.592041    3562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:22:45.592178    3562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2
	I0610 07:22:45.600835    3562 main.go:141] libmachine: STDOUT: 
	I0610 07:22:45.600849    3562 main.go:141] libmachine: STDERR: 
	I0610 07:22:45.600909    3562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2 +20000M
	I0610 07:22:45.608007    3562 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:22:45.608025    3562 main.go:141] libmachine: STDERR: 
	I0610 07:22:45.608048    3562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2
	I0610 07:22:45.608053    3562 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:22:45.608090    3562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:75:c3:2f:be:f4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2
	I0610 07:22:45.609597    3562 main.go:141] libmachine: STDOUT: 
	I0610 07:22:45.609610    3562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:22:45.609628    3562 client.go:171] LocalClient.Create took 232.490166ms
	I0610 07:22:47.611729    3562 start.go:128] duration metric: createHost completed in 2.256669417s
	I0610 07:22:47.611795    3562 start.go:83] releasing machines lock for "enable-default-cni-176000", held for 2.25677675s
	W0610 07:22:47.611855    3562 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:47.624145    3562 out.go:177] * Deleting "enable-default-cni-176000" in qemu2 ...
	W0610 07:22:47.644754    3562 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:47.644783    3562 start.go:702] Will try again in 5 seconds ...
	I0610 07:22:52.646907    3562 start.go:364] acquiring machines lock for enable-default-cni-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:22:52.647370    3562 start.go:368] acquired machines lock for "enable-default-cni-176000" in 372.5µs
	I0610 07:22:52.647501    3562 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:enable-default-cni-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:22:52.647796    3562 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:22:52.657294    3562 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:22:52.704395    3562 start.go:159] libmachine.API.Create for "enable-default-cni-176000" (driver="qemu2")
	I0610 07:22:52.704430    3562 client.go:168] LocalClient.Create starting
	I0610 07:22:52.704560    3562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:22:52.704612    3562 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:52.704638    3562 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:52.704725    3562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:22:52.704764    3562 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:52.704789    3562 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:52.705385    3562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:22:52.836143    3562 main.go:141] libmachine: Creating SSH key...
	I0610 07:22:52.920110    3562 main.go:141] libmachine: Creating Disk image...
	I0610 07:22:52.920116    3562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:22:52.920278    3562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2
	I0610 07:22:52.928908    3562 main.go:141] libmachine: STDOUT: 
	I0610 07:22:52.928922    3562 main.go:141] libmachine: STDERR: 
	I0610 07:22:52.928972    3562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2 +20000M
	I0610 07:22:52.936032    3562 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:22:52.936045    3562 main.go:141] libmachine: STDERR: 
	I0610 07:22:52.936059    3562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2
	I0610 07:22:52.936068    3562 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:22:52.936113    3562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:f7:eb:91:e1:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/enable-default-cni-176000/disk.qcow2
	I0610 07:22:52.937640    3562 main.go:141] libmachine: STDOUT: 
	I0610 07:22:52.937654    3562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:22:52.937666    3562 client.go:171] LocalClient.Create took 233.239166ms
	I0610 07:22:54.939813    3562 start.go:128] duration metric: createHost completed in 2.292036708s
	I0610 07:22:54.939908    3562 start.go:83] releasing machines lock for "enable-default-cni-176000", held for 2.2925855s
	W0610 07:22:54.940428    3562 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:54.951185    3562 out.go:177] 
	W0610 07:22:54.955335    3562 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:22:54.955385    3562 out.go:239] * 
	* 
	W0610 07:22:54.958325    3562 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:22:54.967273    3562 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.74s)
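The repeated failure in this group has a single root cause: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu-system-aarch64 is never launched. As a quick triage step, one can probe that socket directly; this is a minimal sketch, not part of the test suite, with the socket path taken from the logs above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client connects to.
	// "connection refused" here reproduces the failure logged above and
	// means no socket_vmnet daemon is listening on that path.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, starting the socket_vmnet daemon on that path (see the socket_vmnet documentation for the exact invocation) should clear every "Connection refused" failure in this group.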

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.696365208s)

                                                
                                                
-- stdout --
	* [bridge-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node bridge-176000 in cluster bridge-176000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-176000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 07:22:57.156668    3671 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:22:57.156834    3671 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:22:57.156836    3671 out.go:309] Setting ErrFile to fd 2...
	I0610 07:22:57.156839    3671 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:22:57.156907    3671 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:22:57.157920    3671 out.go:303] Setting JSON to false
	I0610 07:22:57.173038    3671 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1347,"bootTime":1686405630,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:22:57.173111    3671 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:22:57.180263    3671 out.go:177] * [bridge-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:22:57.184184    3671 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:22:57.184245    3671 notify.go:220] Checking for updates...
	I0610 07:22:57.191177    3671 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:22:57.192552    3671 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:22:57.195181    3671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:22:57.198211    3671 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:22:57.201205    3671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:22:57.204984    3671 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:22:57.205040    3671 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:22:57.209210    3671 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:22:57.216194    3671 start.go:297] selected driver: qemu2
	I0610 07:22:57.216199    3671 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:22:57.216206    3671 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:22:57.218205    3671 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:22:57.221211    3671 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:22:57.224287    3671 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:22:57.224309    3671 cni.go:84] Creating CNI manager for "bridge"
	I0610 07:22:57.224313    3671 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:22:57.224326    3671 start_flags.go:319] config:
	{Name:bridge-176000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:22:57.224419    3671 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:22:57.228221    3671 out.go:177] * Starting control plane node bridge-176000 in cluster bridge-176000
	I0610 07:22:57.235210    3671 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:22:57.235238    3671 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:22:57.235255    3671 cache.go:57] Caching tarball of preloaded images
	I0610 07:22:57.235346    3671 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:22:57.235353    3671 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:22:57.235414    3671 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/bridge-176000/config.json ...
	I0610 07:22:57.235431    3671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/bridge-176000/config.json: {Name:mk38ce05f1bb3f78f2c34fad04edafe9a66417b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:22:57.235638    3671 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:22:57.235651    3671 start.go:364] acquiring machines lock for bridge-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:22:57.235685    3671 start.go:368] acquired machines lock for "bridge-176000" in 26.792µs
	I0610 07:22:57.235697    3671 start.go:93] Provisioning new machine with config: &{Name:bridge-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:22:57.235725    3671 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:22:57.244113    3671 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:22:57.260283    3671 start.go:159] libmachine.API.Create for "bridge-176000" (driver="qemu2")
	I0610 07:22:57.260309    3671 client.go:168] LocalClient.Create starting
	I0610 07:22:57.260372    3671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:22:57.260391    3671 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:57.260402    3671 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:57.260453    3671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:22:57.260468    3671 main.go:141] libmachine: Decoding PEM data...
	I0610 07:22:57.260476    3671 main.go:141] libmachine: Parsing certificate...
	I0610 07:22:57.260813    3671 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:22:57.374305    3671 main.go:141] libmachine: Creating SSH key...
	I0610 07:22:57.447748    3671 main.go:141] libmachine: Creating Disk image...
	I0610 07:22:57.447753    3671 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:22:57.447902    3671 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2
	I0610 07:22:57.456352    3671 main.go:141] libmachine: STDOUT: 
	I0610 07:22:57.456367    3671 main.go:141] libmachine: STDERR: 
	I0610 07:22:57.456412    3671 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2 +20000M
	I0610 07:22:57.463526    3671 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:22:57.463537    3671 main.go:141] libmachine: STDERR: 
	I0610 07:22:57.463551    3671 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2
	I0610 07:22:57.463564    3671 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:22:57.463596    3671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:66:c8:28:aa:53 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2
	I0610 07:22:57.465104    3671 main.go:141] libmachine: STDOUT: 
	I0610 07:22:57.465118    3671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:22:57.465134    3671 client.go:171] LocalClient.Create took 204.825792ms
	I0610 07:22:59.467247    3671 start.go:128] duration metric: createHost completed in 2.231576125s
	I0610 07:22:59.467333    3671 start.go:83] releasing machines lock for "bridge-176000", held for 2.231687208s
	W0610 07:22:59.467388    3671 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:59.474689    3671 out.go:177] * Deleting "bridge-176000" in qemu2 ...
	W0610 07:22:59.498536    3671 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:22:59.498565    3671 start.go:702] Will try again in 5 seconds ...
	I0610 07:23:04.500635    3671 start.go:364] acquiring machines lock for bridge-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:04.501221    3671 start.go:368] acquired machines lock for "bridge-176000" in 481.458µs
	I0610 07:23:04.501346    3671 start.go:93] Provisioning new machine with config: &{Name:bridge-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:bridge-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:04.501699    3671 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:04.510794    3671 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:23:04.555753    3671 start.go:159] libmachine.API.Create for "bridge-176000" (driver="qemu2")
	I0610 07:23:04.555805    3671 client.go:168] LocalClient.Create starting
	I0610 07:23:04.555960    3671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:04.556007    3671 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:04.556027    3671 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:04.556101    3671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:04.556129    3671 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:04.556141    3671 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:04.556683    3671 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:04.679203    3671 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:04.767211    3671 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:04.767217    3671 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:04.767358    3671 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2
	I0610 07:23:04.775671    3671 main.go:141] libmachine: STDOUT: 
	I0610 07:23:04.775689    3671 main.go:141] libmachine: STDERR: 
	I0610 07:23:04.775745    3671 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2 +20000M
	I0610 07:23:04.782897    3671 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:04.782914    3671 main.go:141] libmachine: STDERR: 
	I0610 07:23:04.782926    3671 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2
	I0610 07:23:04.782931    3671 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:04.782965    3671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:7b:10:bb:7c:51 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/bridge-176000/disk.qcow2
	I0610 07:23:04.784478    3671 main.go:141] libmachine: STDOUT: 
	I0610 07:23:04.784490    3671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:04.784503    3671 client.go:171] LocalClient.Create took 228.69625ms
	I0610 07:23:06.786629    3671 start.go:128] duration metric: createHost completed in 2.284979375s
	I0610 07:23:06.786685    3671 start.go:83] releasing machines lock for "bridge-176000", held for 2.285514375s
	W0610 07:23:06.787140    3671 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:06.796852    3671 out.go:177] 
	W0610 07:23:06.800827    3671 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:23:06.800850    3671 out.go:239] * 
	* 
	W0610 07:23:06.803321    3671 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:23:06.811836    3671 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.71s)
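Note that the disk-preparation steps before the VM launch succeed on every attempt: libmachine converts the raw boot image to qcow2 and then grows it by 20000M, and both qemu-img calls return cleanly ("Image resized.", empty STDERR). A hypothetical standalone helper mirroring those two invocations (the file names and grow size here are illustrative, taken from the log above):

package main

import (
	"fmt"
	"os/exec"
)

// prepareDisk mirrors the two qemu-img steps in the log:
// a raw -> qcow2 conversion followed by an in-place grow.
func prepareDisk(raw, qcow2, grow string) error {
	convert := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2)
	if out, err := convert.CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img convert: %v: %s", err, out)
	}
	resize := exec.Command("qemu-img", "resize", qcow2, grow)
	if out, err := resize.CombinedOutput(); err != nil {
		return fmt.Errorf("qemu-img resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Illustrative paths; the real driver works in the machine directory
	// under MINIKUBE_HOME shown in the log.
	if err := prepareDisk("disk.qcow2.raw", "disk.qcow2", "+20000M"); err != nil {
		fmt.Println(err)
	}
}

Since the disk image is built without error, the run only falls over at the subsequent socket_vmnet connection, the same failure as the other tests in this group.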

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
E0610 07:23:14.342890    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.932913125s)

                                                
                                                
-- stdout --
	* [kubenet-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node kubenet-176000 in cluster kubenet-176000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-176000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 07:23:08.999984    3780 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:23:09.000103    3780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:09.000105    3780 out.go:309] Setting ErrFile to fd 2...
	I0610 07:23:09.000108    3780 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:09.000179    3780 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:23:09.001170    3780 out.go:303] Setting JSON to false
	I0610 07:23:09.016245    3780 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1359,"bootTime":1686405630,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:23:09.016313    3780 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:23:09.021088    3780 out.go:177] * [kubenet-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:23:09.029032    3780 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:23:09.029068    3780 notify.go:220] Checking for updates...
	I0610 07:23:09.033086    3780 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:23:09.035970    3780 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:23:09.039036    3780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:23:09.042019    3780 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:23:09.044947    3780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:23:09.048770    3780 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:23:09.048820    3780 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:23:09.053010    3780 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:23:09.059994    3780 start.go:297] selected driver: qemu2
	I0610 07:23:09.060000    3780 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:23:09.060012    3780 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:23:09.062026    3780 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:23:09.065042    3780 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:23:09.068075    3780 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:23:09.068090    3780 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0610 07:23:09.068093    3780 start_flags.go:319] config:
	{Name:kubenet-176000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:23:09.068176    3780 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:09.072048    3780 out.go:177] * Starting control plane node kubenet-176000 in cluster kubenet-176000
	I0610 07:23:09.079990    3780 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:23:09.080012    3780 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:23:09.080029    3780 cache.go:57] Caching tarball of preloaded images
	I0610 07:23:09.080104    3780 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:23:09.080109    3780 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:23:09.080180    3780 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/kubenet-176000/config.json ...
	I0610 07:23:09.080196    3780 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/kubenet-176000/config.json: {Name:mk7964135a57c40f67ada391728bd25652382701 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:23:09.080390    3780 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:23:09.080401    3780 start.go:364] acquiring machines lock for kubenet-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:09.080431    3780 start.go:368] acquired machines lock for "kubenet-176000" in 24.708µs
	I0610 07:23:09.080442    3780 start.go:93] Provisioning new machine with config: &{Name:kubenet-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:09.080469    3780 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:09.088965    3780 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:23:09.105667    3780 start.go:159] libmachine.API.Create for "kubenet-176000" (driver="qemu2")
	I0610 07:23:09.105697    3780 client.go:168] LocalClient.Create starting
	I0610 07:23:09.105759    3780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:09.105781    3780 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:09.105795    3780 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:09.105844    3780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:09.105860    3780 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:09.105869    3780 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:09.106236    3780 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:09.229215    3780 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:09.513235    3780 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:09.513245    3780 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:09.513454    3780 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2
	I0610 07:23:09.522888    3780 main.go:141] libmachine: STDOUT: 
	I0610 07:23:09.522905    3780 main.go:141] libmachine: STDERR: 
	I0610 07:23:09.522965    3780 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2 +20000M
	I0610 07:23:09.530239    3780 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:09.530258    3780 main.go:141] libmachine: STDERR: 
	I0610 07:23:09.530281    3780 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2
	I0610 07:23:09.530286    3780 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:09.530325    3780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:b2:99:8a:4a:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2
	I0610 07:23:09.531890    3780 main.go:141] libmachine: STDOUT: 
	I0610 07:23:09.531904    3780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:09.531920    3780 client.go:171] LocalClient.Create took 426.232583ms
	I0610 07:23:11.534173    3780 start.go:128] duration metric: createHost completed in 2.45369925s
	I0610 07:23:11.534243    3780 start.go:83] releasing machines lock for "kubenet-176000", held for 2.453878208s
	W0610 07:23:11.534294    3780 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:11.541583    3780 out.go:177] * Deleting "kubenet-176000" in qemu2 ...
	W0610 07:23:11.561628    3780 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:11.561657    3780 start.go:702] Will try again in 5 seconds ...
	I0610 07:23:16.563794    3780 start.go:364] acquiring machines lock for kubenet-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:16.564390    3780 start.go:368] acquired machines lock for "kubenet-176000" in 498.292µs
	I0610 07:23:16.564516    3780 start.go:93] Provisioning new machine with config: &{Name:kubenet-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:kubenet-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:16.564866    3780 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:16.574561    3780 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:23:16.622872    3780 start.go:159] libmachine.API.Create for "kubenet-176000" (driver="qemu2")
	I0610 07:23:16.622928    3780 client.go:168] LocalClient.Create starting
	I0610 07:23:16.623059    3780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:16.623105    3780 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:16.623123    3780 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:16.623204    3780 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:16.623232    3780 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:16.623249    3780 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:16.623795    3780 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:16.749354    3780 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:16.845407    3780 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:16.845415    3780 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:16.845554    3780 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2
	I0610 07:23:16.854198    3780 main.go:141] libmachine: STDOUT: 
	I0610 07:23:16.854210    3780 main.go:141] libmachine: STDERR: 
	I0610 07:23:16.854263    3780 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2 +20000M
	I0610 07:23:16.861311    3780 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:16.861323    3780 main.go:141] libmachine: STDERR: 
	I0610 07:23:16.861337    3780 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2
	I0610 07:23:16.861344    3780 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:16.861387    3780 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:6f:6a:9c:42:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/kubenet-176000/disk.qcow2
	I0610 07:23:16.862869    3780 main.go:141] libmachine: STDOUT: 
	I0610 07:23:16.862882    3780 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:16.862893    3780 client.go:171] LocalClient.Create took 239.966333ms
	I0610 07:23:18.865032    3780 start.go:128] duration metric: createHost completed in 2.3002s
	I0610 07:23:18.865113    3780 start.go:83] releasing machines lock for "kubenet-176000", held for 2.300771375s
	W0610 07:23:18.865511    3780 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:18.876323    3780 out.go:177] 
	W0610 07:23:18.880380    3780 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:23:18.880411    3780 out.go:239] * 
	* 
	W0610 07:23:18.883138    3780 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:23:18.892195    3780 out.go:177] 
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.93s)
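Note: every start in this network-plugins group aborts at the same step: qemu-system-aarch64 is launched through socket_vmnet_client, which cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"). A minimal way to check the daemon on the build host is sketched below; this is a diagnostic sketch, not part of the test run, and the --vmnet-gateway address is only the example value from the socket_vmnet documentation.

	# Is the socket_vmnet daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet

	# If not, start the daemon (requires root; gateway address is an example)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

	# Exercise just the connect step: the client connects, then execs the
	# given command (here the no-op true) with the socket passed as an
	# inherited fd -- the same fd=3 seen in the -netdev flags above
	sudo /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true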
TestNetworkPlugins/group/custom-flannel/Start (9.83s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.827146875s)
-- stdout --
	* [custom-flannel-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node custom-flannel-176000 in cluster custom-flannel-176000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-176000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0610 07:23:21.069739    3892 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:23:21.069877    3892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:21.069881    3892 out.go:309] Setting ErrFile to fd 2...
	I0610 07:23:21.069883    3892 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:21.069949    3892 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:23:21.071014    3892 out.go:303] Setting JSON to false
	I0610 07:23:21.086193    3892 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1371,"bootTime":1686405630,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:23:21.086258    3892 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:23:21.091672    3892 out.go:177] * [custom-flannel-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:23:21.095446    3892 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:23:21.095512    3892 notify.go:220] Checking for updates...
	I0610 07:23:21.099556    3892 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:23:21.103681    3892 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:23:21.106471    3892 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:23:21.109545    3892 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:23:21.112610    3892 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:23:21.114273    3892 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:23:21.114331    3892 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:23:21.118543    3892 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:23:21.125416    3892 start.go:297] selected driver: qemu2
	I0610 07:23:21.125421    3892 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:23:21.125428    3892 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:23:21.127185    3892 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:23:21.130514    3892 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:23:21.133662    3892 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:23:21.133682    3892 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0610 07:23:21.133702    3892 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0610 07:23:21.133710    3892 start_flags.go:319] config:
	{Name:custom-flannel-176000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:23:21.133798    3892 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:21.141537    3892 out.go:177] * Starting control plane node custom-flannel-176000 in cluster custom-flannel-176000
	I0610 07:23:21.145582    3892 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:23:21.145605    3892 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:23:21.145619    3892 cache.go:57] Caching tarball of preloaded images
	I0610 07:23:21.145677    3892 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:23:21.145683    3892 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:23:21.145749    3892 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/custom-flannel-176000/config.json ...
	I0610 07:23:21.145760    3892 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/custom-flannel-176000/config.json: {Name:mk5279efb1884072af523718b2275fea929367c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:23:21.145962    3892 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:23:21.145973    3892 start.go:364] acquiring machines lock for custom-flannel-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:21.146002    3892 start.go:368] acquired machines lock for "custom-flannel-176000" in 24.917µs
	I0610 07:23:21.146013    3892 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:21.146038    3892 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:21.154591    3892 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:23:21.171062    3892 start.go:159] libmachine.API.Create for "custom-flannel-176000" (driver="qemu2")
	I0610 07:23:21.171093    3892 client.go:168] LocalClient.Create starting
	I0610 07:23:21.171154    3892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:21.171177    3892 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:21.171187    3892 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:21.171234    3892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:21.171250    3892 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:21.171258    3892 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:21.171623    3892 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:21.287009    3892 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:21.349248    3892 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:21.349254    3892 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:21.349390    3892 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2
	I0610 07:23:21.357859    3892 main.go:141] libmachine: STDOUT: 
	I0610 07:23:21.357877    3892 main.go:141] libmachine: STDERR: 
	I0610 07:23:21.357934    3892 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2 +20000M
	I0610 07:23:21.365142    3892 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:21.365154    3892 main.go:141] libmachine: STDERR: 
	I0610 07:23:21.365173    3892 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2
	I0610 07:23:21.365180    3892 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:21.365217    3892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:c6:6c:ac:f7:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2
	I0610 07:23:21.366693    3892 main.go:141] libmachine: STDOUT: 
	I0610 07:23:21.366705    3892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:21.366721    3892 client.go:171] LocalClient.Create took 195.629ms
	I0610 07:23:23.368830    3892 start.go:128] duration metric: createHost completed in 2.222841708s
	I0610 07:23:23.368893    3892 start.go:83] releasing machines lock for "custom-flannel-176000", held for 2.22295125s
	W0610 07:23:23.369071    3892 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:23.380972    3892 out.go:177] * Deleting "custom-flannel-176000" in qemu2 ...
	W0610 07:23:23.400652    3892 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:23.400683    3892 start.go:702] Will try again in 5 seconds ...
	I0610 07:23:28.402839    3892 start.go:364] acquiring machines lock for custom-flannel-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:28.405022    3892 start.go:368] acquired machines lock for "custom-flannel-176000" in 2.085542ms
	I0610 07:23:28.405147    3892 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:custom-flannel-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:28.405480    3892 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:28.413485    3892 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:23:28.461346    3892 start.go:159] libmachine.API.Create for "custom-flannel-176000" (driver="qemu2")
	I0610 07:23:28.461393    3892 client.go:168] LocalClient.Create starting
	I0610 07:23:28.461522    3892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:28.461573    3892 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:28.461594    3892 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:28.461678    3892 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:28.461708    3892 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:28.461720    3892 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:28.462292    3892 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:28.577434    3892 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:28.812433    3892 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:28.812442    3892 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:28.812602    3892 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2
	I0610 07:23:28.821237    3892 main.go:141] libmachine: STDOUT: 
	I0610 07:23:28.821252    3892 main.go:141] libmachine: STDERR: 
	I0610 07:23:28.821310    3892 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2 +20000M
	I0610 07:23:28.828327    3892 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:28.828339    3892 main.go:141] libmachine: STDERR: 
	I0610 07:23:28.828353    3892 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2
	I0610 07:23:28.828359    3892 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:28.828404    3892 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:98:23:32:5d:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/custom-flannel-176000/disk.qcow2
	I0610 07:23:28.829903    3892 main.go:141] libmachine: STDOUT: 
	I0610 07:23:28.829915    3892 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:28.829925    3892 client.go:171] LocalClient.Create took 368.539833ms
	I0610 07:23:30.832017    3892 start.go:128] duration metric: createHost completed in 2.426591125s
	I0610 07:23:30.832100    3892 start.go:83] releasing machines lock for "custom-flannel-176000", held for 2.427106542s
	W0610 07:23:30.832509    3892 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:30.841020    3892 out.go:177] 
	W0610 07:23:30.844166    3892 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:23:30.844192    3892 out.go:239] * 
	* 
	W0610 07:23:30.847001    3892 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:23:30.856110    3892 out.go:177] 
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.83s)
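Note: disk preparation succeeds on every attempt; only the socket_vmnet hookup fails. To rule out the disk path when reproducing, the two qemu-img steps that libmachine runs (verbatim in the log above) can be replayed standalone. A sketch, with MACHINE_DIR as a stand-in for the per-profile machines directory used in these logs:

	# Mirror libmachine's two disk steps: raw -> qcow2, then grow by 20000 MB
	MACHINE_DIR="$HOME/.minikube/machines/custom-flannel-176000"   # stand-in path
	qemu-img convert -f raw -O qcow2 "$MACHINE_DIR/disk.qcow2.raw" "$MACHINE_DIR/disk.qcow2"
	qemu-img resize "$MACHINE_DIR/disk.qcow2" +20000M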
TestNetworkPlugins/group/calico/Start (9.77s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.761319833s)
-- stdout --
	* [calico-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node calico-176000 in cluster calico-176000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-176000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0610 07:23:33.233883    4009 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:23:33.234009    4009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:33.234012    4009 out.go:309] Setting ErrFile to fd 2...
	I0610 07:23:33.234014    4009 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:33.234086    4009 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:23:33.235135    4009 out.go:303] Setting JSON to false
	I0610 07:23:33.250114    4009 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1383,"bootTime":1686405630,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:23:33.250185    4009 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:23:33.253971    4009 out.go:177] * [calico-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:23:33.261906    4009 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:23:33.261977    4009 notify.go:220] Checking for updates...
	I0610 07:23:33.267818    4009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:23:33.270863    4009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:23:33.272278    4009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:23:33.275842    4009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:23:33.278856    4009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:23:33.280559    4009 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:23:33.280598    4009 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:23:33.284853    4009 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:23:33.291706    4009 start.go:297] selected driver: qemu2
	I0610 07:23:33.291718    4009 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:23:33.291725    4009 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:23:33.293532    4009 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:23:33.296782    4009 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:23:33.299910    4009 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:23:33.299933    4009 cni.go:84] Creating CNI manager for "calico"
	I0610 07:23:33.299937    4009 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0610 07:23:33.299945    4009 start_flags.go:319] config:
	{Name:calico-176000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:calico-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:23:33.300044    4009 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:33.307842    4009 out.go:177] * Starting control plane node calico-176000 in cluster calico-176000
	I0610 07:23:33.311836    4009 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:23:33.311858    4009 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:23:33.311871    4009 cache.go:57] Caching tarball of preloaded images
	I0610 07:23:33.311953    4009 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:23:33.311958    4009 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:23:33.312023    4009 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/calico-176000/config.json ...
	I0610 07:23:33.312039    4009 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/calico-176000/config.json: {Name:mk6a2b359749b1c1c241b62599c09a8621c4f4bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:23:33.312231    4009 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:23:33.312243    4009 start.go:364] acquiring machines lock for calico-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:33.312272    4009 start.go:368] acquired machines lock for "calico-176000" in 24.041µs
	I0610 07:23:33.312283    4009 start.go:93] Provisioning new machine with config: &{Name:calico-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:calico-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:33.312309    4009 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:33.320863    4009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:23:33.337296    4009 start.go:159] libmachine.API.Create for "calico-176000" (driver="qemu2")
	I0610 07:23:33.337310    4009 client.go:168] LocalClient.Create starting
	I0610 07:23:33.337364    4009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:33.337389    4009 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:33.337397    4009 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:33.337430    4009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:33.337448    4009 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:33.337455    4009 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:33.337757    4009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:33.444261    4009 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:33.563924    4009 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:33.563932    4009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:33.564077    4009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2
	I0610 07:23:33.572814    4009 main.go:141] libmachine: STDOUT: 
	I0610 07:23:33.572826    4009 main.go:141] libmachine: STDERR: 
	I0610 07:23:33.572883    4009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2 +20000M
	I0610 07:23:33.579951    4009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:33.579961    4009 main.go:141] libmachine: STDERR: 
	I0610 07:23:33.579980    4009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2
	I0610 07:23:33.579987    4009 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:33.580032    4009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:f3:e1:63:b6:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2
	I0610 07:23:33.581565    4009 main.go:141] libmachine: STDOUT: 
	I0610 07:23:33.581588    4009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:33.581609    4009 client.go:171] LocalClient.Create took 244.3015ms
	I0610 07:23:35.583771    4009 start.go:128] duration metric: createHost completed in 2.2714995s
	I0610 07:23:35.583872    4009 start.go:83] releasing machines lock for "calico-176000", held for 2.27166175s
	W0610 07:23:35.583986    4009 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:35.595306    4009 out.go:177] * Deleting "calico-176000" in qemu2 ...
	W0610 07:23:35.614195    4009 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:35.614226    4009 start.go:702] Will try again in 5 seconds ...
	I0610 07:23:40.616433    4009 start.go:364] acquiring machines lock for calico-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:40.617073    4009 start.go:368] acquired machines lock for "calico-176000" in 514.166µs
	I0610 07:23:40.617187    4009 start.go:93] Provisioning new machine with config: &{Name:calico-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:calico-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:40.617669    4009 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:40.624818    4009 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:23:40.674250    4009 start.go:159] libmachine.API.Create for "calico-176000" (driver="qemu2")
	I0610 07:23:40.674303    4009 client.go:168] LocalClient.Create starting
	I0610 07:23:40.674457    4009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:40.674496    4009 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:40.674525    4009 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:40.674605    4009 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:40.674633    4009 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:40.674649    4009 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:40.675155    4009 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:40.801660    4009 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:40.910597    4009 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:40.910605    4009 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:40.910754    4009 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2
	I0610 07:23:40.919588    4009 main.go:141] libmachine: STDOUT: 
	I0610 07:23:40.919604    4009 main.go:141] libmachine: STDERR: 
	I0610 07:23:40.919673    4009 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2 +20000M
	I0610 07:23:40.926740    4009 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:40.926751    4009 main.go:141] libmachine: STDERR: 
	I0610 07:23:40.926764    4009 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2
	I0610 07:23:40.926774    4009 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:40.926814    4009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:fe:55:28:85:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/calico-176000/disk.qcow2
	I0610 07:23:40.928319    4009 main.go:141] libmachine: STDOUT: 
	I0610 07:23:40.928330    4009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:40.928341    4009 client.go:171] LocalClient.Create took 254.0375ms
	I0610 07:23:42.930429    4009 start.go:128] duration metric: createHost completed in 2.312792542s
	I0610 07:23:42.930491    4009 start.go:83] releasing machines lock for "calico-176000", held for 2.313464959s
	W0610 07:23:42.930923    4009 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:42.941528    4009 out.go:177] 
	W0610 07:23:42.945688    4009 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:23:42.945710    4009 out.go:239] * 
	* 
	W0610 07:23:42.948495    4009 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:23:42.954564    4009 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.77s)
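
Both this failure and the other NetworkPlugins Start failures reduce to the same error: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), meaning no socket_vmnet daemon was listening on the CI host. Below is a minimal Go sketch of that reachability check, assuming only the socket path taken from the logs above; checkSocketVMnet is a hypothetical helper written for illustration, not minikube code.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// checkSocketVMnet dials the unix socket that socket_vmnet_client
	// connects to; a "connection refused" error here reproduces the
	// failure in the logs above (nothing listening on the socket).
	func checkSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := checkSocketVMnet("/var/run/socket_vmnet"); err != nil {
			fmt.Println(err) // e.g. "... connect: connection refused"
			return
		}
		fmt.Println("socket_vmnet is up")
	}

If the probe fails with "connection refused", restarting the socket_vmnet daemon on the host is the usual remedy; it must be running before any qemu2 cluster on the socket_vmnet network can start.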

TestNetworkPlugins/group/false/Start (9.67s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-arm64 start -p false-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-176000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.668039708s)

-- stdout --
	* [false-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node false-176000 in cluster false-176000
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-176000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:23:45.327027    4126 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:23:45.327159    4126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:45.327162    4126 out.go:309] Setting ErrFile to fd 2...
	I0610 07:23:45.327165    4126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:45.327238    4126 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:23:45.328285    4126 out.go:303] Setting JSON to false
	I0610 07:23:45.343267    4126 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1395,"bootTime":1686405630,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:23:45.343352    4126 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:23:45.348019    4126 out.go:177] * [false-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:23:45.356004    4126 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:23:45.356056    4126 notify.go:220] Checking for updates...
	I0610 07:23:45.362938    4126 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:23:45.365973    4126 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:23:45.368974    4126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:23:45.371904    4126 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:23:45.374914    4126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:23:45.378279    4126 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:23:45.378325    4126 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:23:45.382840    4126 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:23:45.389945    4126 start.go:297] selected driver: qemu2
	I0610 07:23:45.389950    4126 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:23:45.389956    4126 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:23:45.391848    4126 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:23:45.394894    4126 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:23:45.398060    4126 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:23:45.398086    4126 cni.go:84] Creating CNI manager for "false"
	I0610 07:23:45.398094    4126 start_flags.go:319] config:
	{Name:false-176000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:23:45.398183    4126 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:45.405954    4126 out.go:177] * Starting control plane node false-176000 in cluster false-176000
	I0610 07:23:45.410002    4126 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:23:45.410029    4126 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:23:45.410038    4126 cache.go:57] Caching tarball of preloaded images
	I0610 07:23:45.410103    4126 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:23:45.410109    4126 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:23:45.410169    4126 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/false-176000/config.json ...
	I0610 07:23:45.410181    4126 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/false-176000/config.json: {Name:mk5094dffdb722a01e98a33321efcafb4432b998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:23:45.410392    4126 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:23:45.410404    4126 start.go:364] acquiring machines lock for false-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:45.410441    4126 start.go:368] acquired machines lock for "false-176000" in 31.375µs
	I0610 07:23:45.410453    4126 start.go:93] Provisioning new machine with config: &{Name:false-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:45.410480    4126 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:45.418928    4126 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:23:45.435800    4126 start.go:159] libmachine.API.Create for "false-176000" (driver="qemu2")
	I0610 07:23:45.435826    4126 client.go:168] LocalClient.Create starting
	I0610 07:23:45.435896    4126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:45.435920    4126 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:45.435930    4126 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:45.435981    4126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:45.435999    4126 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:45.436005    4126 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:45.436330    4126 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:45.560689    4126 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:45.594422    4126 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:45.594427    4126 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:45.594566    4126 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2
	I0610 07:23:45.603116    4126 main.go:141] libmachine: STDOUT: 
	I0610 07:23:45.603130    4126 main.go:141] libmachine: STDERR: 
	I0610 07:23:45.603168    4126 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2 +20000M
	I0610 07:23:45.610321    4126 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:45.610333    4126 main.go:141] libmachine: STDERR: 
	I0610 07:23:45.610343    4126 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2
	I0610 07:23:45.610349    4126 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:45.610375    4126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:c7:6e:66:17:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2
	I0610 07:23:45.611885    4126 main.go:141] libmachine: STDOUT: 
	I0610 07:23:45.611898    4126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:45.611916    4126 client.go:171] LocalClient.Create took 176.089959ms
	I0610 07:23:47.614053    4126 start.go:128] duration metric: createHost completed in 2.203625666s
	I0610 07:23:47.614098    4126 start.go:83] releasing machines lock for "false-176000", held for 2.20371825s
	W0610 07:23:47.614150    4126 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:47.626125    4126 out.go:177] * Deleting "false-176000" in qemu2 ...
	W0610 07:23:47.645971    4126 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:47.646002    4126 start.go:702] Will try again in 5 seconds ...
	I0610 07:23:52.648064    4126 start.go:364] acquiring machines lock for false-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:52.648496    4126 start.go:368] acquired machines lock for "false-176000" in 331.417µs
	I0610 07:23:52.648614    4126 start.go:93] Provisioning new machine with config: &{Name:false-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:52.648899    4126 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:52.658259    4126 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:23:52.706865    4126 start.go:159] libmachine.API.Create for "false-176000" (driver="qemu2")
	I0610 07:23:52.706916    4126 client.go:168] LocalClient.Create starting
	I0610 07:23:52.707050    4126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:52.707106    4126 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:52.707128    4126 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:52.707220    4126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:52.707271    4126 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:52.707284    4126 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:52.707956    4126 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:52.830122    4126 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:52.910006    4126 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:52.910013    4126 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:52.910161    4126 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2
	I0610 07:23:52.918449    4126 main.go:141] libmachine: STDOUT: 
	I0610 07:23:52.918466    4126 main.go:141] libmachine: STDERR: 
	I0610 07:23:52.918522    4126 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2 +20000M
	I0610 07:23:52.925554    4126 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:52.925565    4126 main.go:141] libmachine: STDERR: 
	I0610 07:23:52.925576    4126 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2
	I0610 07:23:52.925582    4126 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:52.925630    4126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:bc:b4:33:41:45 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2
	I0610 07:23:52.927085    4126 main.go:141] libmachine: STDOUT: 
	I0610 07:23:52.927100    4126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:52.927120    4126 client.go:171] LocalClient.Create took 220.202375ms
	I0610 07:23:54.929265    4126 start.go:128] duration metric: createHost completed in 2.280385667s
	I0610 07:23:54.929357    4126 start.go:83] releasing machines lock for "false-176000", held for 2.280907291s
	W0610 07:23:54.929854    4126 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-176000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:54.937295    4126 out.go:177] 
	W0610 07:23:54.942300    4126 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:23:54.942327    4126 out.go:239] * 
	* 
	W0610 07:23:54.944851    4126 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:23:54.954218    4126 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.67s)

TestStoppedBinaryUpgrade/Upgrade (2.3s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3364477537.exe start -p stopped-upgrade-447000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3364477537.exe start -p stopped-upgrade-447000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3364477537.exe: permission denied (7.315542ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3364477537.exe start -p stopped-upgrade-447000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3364477537.exe start -p stopped-upgrade-447000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3364477537.exe: permission denied (7.05225ms)
version_upgrade_test.go:195: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3364477537.exe start -p stopped-upgrade-447000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:195: (dbg) Non-zero exit: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3364477537.exe start -p stopped-upgrade-447000 --memory=2200 --vm-driver=qemu2 : fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3364477537.exe: permission denied (6.880709ms)
version_upgrade_test.go:201: legacy v1.6.2 start failed: fork/exec /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.6.2.3364477537.exe: permission denied
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (2.30s)
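
The repeated "permission denied" from fork/exec above is the classic symptom of a freshly downloaded binary that is missing its executable bit: the kernel refuses the exec before the legacy v1.6.2 minikube ever runs. Below is a minimal Go sketch of the missing precondition, assuming an illustrative path rather than the real temp file from the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Illustrative path; the real test writes the binary to a temp dir.
		bin := "/tmp/minikube-v1.6.2"

		// Without the executable bit, exec fails exactly as in the log:
		// "fork/exec /tmp/minikube-v1.6.2: permission denied".
		if err := os.Chmod(bin, 0o755); err != nil {
			fmt.Fprintln(os.Stderr, "chmod:", err)
			os.Exit(1)
		}

		out, err := exec.Command(bin, "version").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Fprintln(os.Stderr, "run:", err)
		}
	}

Chmod-ing the downloaded file to 0o755 before the first invocation would avoid this class of failure on all three retries.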

TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-447000
version_upgrade_test.go:218: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p stopped-upgrade-447000: exit status 85 (114.8515ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000                             | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | sudo systemctl cat kubelet                           |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000                             | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000                             | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000                             | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | sudo systemctl cat docker                            |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | cat /etc/docker/daemon.json                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | docker system info                                   |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000                             | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | sudo systemctl cat cri-docker                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo cat                    | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo cat                    | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000                             | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | sudo systemctl cat containerd                        |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo cat                    | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000                             | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | sudo cat                                             |                       |         |         |                     |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | find /etc/crio -type f -exec                         |                       |         |         |                     |                     |
	|         | sh -c 'echo {}; cat {}' \;                           |                       |         |         |                     |                     |
	| ssh     | -p custom-flannel-176000 sudo                        | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | crio config                                          |                       |         |         |                     |                     |
	| delete  | -p custom-flannel-176000                             | custom-flannel-176000 | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT | 10 Jun 23 07:23 PDT |
	| start   | -p calico-176000 --memory=3072                       | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                       |         |         |                     |                     |
	|         | --cni=calico --driver=qemu2                          |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo cat                            | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /etc/nsswitch.conf                                   |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo cat                            | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /etc/hosts                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo cat                            | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /etc/resolv.conf                                     |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo crictl                         | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | pods                                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo crictl                         | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | ps --all                                             |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo find                           | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /etc/cni -type f -exec sh -c                         |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo ip a s                         | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	| ssh     | -p calico-176000 sudo ip r s                         | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | iptables-save                                        |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo iptables                       | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | -t nat -L -n -v                                      |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl status kubelet --all                       |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl cat kubelet                                |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | journalctl -xeu kubelet --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo cat                            | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo cat                            | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /var/lib/kubelet/config.yaml                         |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl status docker --all                        |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl cat docker                                 |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo cat                            | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /etc/docker/daemon.json                              |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo docker                         | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | system info                                          |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl status cri-docker                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl cat cri-docker                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo cat                            | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo cat                            | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | cri-dockerd --version                                |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl status containerd                          |                       |         |         |                     |                     |
	|         | --all --full --no-pager                              |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl cat containerd                             |                       |         |         |                     |                     |
	|         | --no-pager                                           |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo cat                            | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo cat                            | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /etc/containerd/config.toml                          |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | containerd config dump                               |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl status crio --all                          |                       |         |         |                     |                     |
	|         | --full --no-pager                                    |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo                                | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo find                           | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                       |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                       |         |         |                     |                     |
	| ssh     | -p calico-176000 sudo crio                           | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | config                                               |                       |         |         |                     |                     |
	| delete  | -p calico-176000                                     | calico-176000         | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT | 10 Jun 23 07:23 PDT |
	| start   | -p false-176000 --memory=3072                        | false-176000          | jenkins | v1.30.1 | 10 Jun 23 07:23 PDT |                     |
	|         | --alsologtostderr --wait=true                        |                       |         |         |                     |                     |
	|         | --wait-timeout=15m --cni=false                       |                       |         |         |                     |                     |
	|         | --driver=qemu2                                       |                       |         |         |                     |                     |
	|---------|------------------------------------------------------|-----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 07:23:45
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 07:23:45.327027    4126 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:23:45.327159    4126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:45.327162    4126 out.go:309] Setting ErrFile to fd 2...
	I0610 07:23:45.327165    4126 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:45.327238    4126 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:23:45.328285    4126 out.go:303] Setting JSON to false
	I0610 07:23:45.343267    4126 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1395,"bootTime":1686405630,"procs":403,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:23:45.343352    4126 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:23:45.348019    4126 out.go:177] * [false-176000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:23:45.356004    4126 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:23:45.356056    4126 notify.go:220] Checking for updates...
	I0610 07:23:45.362938    4126 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:23:45.365973    4126 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:23:45.368974    4126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:23:45.371904    4126 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:23:45.374914    4126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:23:45.378279    4126 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:23:45.378325    4126 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:23:45.382840    4126 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:23:45.389945    4126 start.go:297] selected driver: qemu2
	I0610 07:23:45.389950    4126 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:23:45.389956    4126 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:23:45.391848    4126 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:23:45.394894    4126 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:23:45.398060    4126 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:23:45.398086    4126 cni.go:84] Creating CNI manager for "false"
	I0610 07:23:45.398094    4126 start_flags.go:319] config:
	{Name:false-176000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:false-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRIS
ocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:23:45.398183    4126 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:45.405954    4126 out.go:177] * Starting control plane node false-176000 in cluster false-176000
	I0610 07:23:45.410002    4126 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:23:45.410029    4126 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:23:45.410038    4126 cache.go:57] Caching tarball of preloaded images
	I0610 07:23:45.410103    4126 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:23:45.410109    4126 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:23:45.410169    4126 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/false-176000/config.json ...
	I0610 07:23:45.410181    4126 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/false-176000/config.json: {Name:mk5094dffdb722a01e98a33321efcafb4432b998 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:23:45.410392    4126 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:23:45.410404    4126 start.go:364] acquiring machines lock for false-176000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:45.410441    4126 start.go:368] acquired machines lock for "false-176000" in 31.375µs
	I0610 07:23:45.410453    4126 start.go:93] Provisioning new machine with config: &{Name:false-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.27.2 ClusterName:false-176000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:45.410480    4126 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:45.418928    4126 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0610 07:23:45.435800    4126 start.go:159] libmachine.API.Create for "false-176000" (driver="qemu2")
	I0610 07:23:45.435826    4126 client.go:168] LocalClient.Create starting
	I0610 07:23:45.435896    4126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:45.435920    4126 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:45.435930    4126 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:45.435981    4126 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:45.435999    4126 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:45.436005    4126 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:45.436330    4126 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:45.560689    4126 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:45.594422    4126 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:45.594427    4126 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:45.594566    4126 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2
	I0610 07:23:45.603116    4126 main.go:141] libmachine: STDOUT: 
	I0610 07:23:45.603130    4126 main.go:141] libmachine: STDERR: 
	I0610 07:23:45.603168    4126 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2 +20000M
	I0610 07:23:45.610321    4126 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:45.610333    4126 main.go:141] libmachine: STDERR: 
	I0610 07:23:45.610343    4126 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2
	I0610 07:23:45.610349    4126 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:45.610375    4126 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:c7:6e:66:17:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/false-176000/disk.qcow2
	I0610 07:23:45.611885    4126 main.go:141] libmachine: STDOUT: 
	I0610 07:23:45.611898    4126 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:45.611916    4126 client.go:171] LocalClient.Create took 176.089959ms
	I0610 07:23:47.614053    4126 start.go:128] duration metric: createHost completed in 2.203625666s
	I0610 07:23:47.614098    4126 start.go:83] releasing machines lock for "false-176000", held for 2.20371825s
	W0610 07:23:47.614150    4126 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:47.626125    4126 out.go:177] * Deleting "false-176000" in qemu2 ...
	
	* 
	* Profile "stopped-upgrade-447000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p stopped-upgrade-447000"

-- /stdout --
version_upgrade_test.go:220: `minikube logs` after upgrade to HEAD from v1.6.2 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.12s)
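
Editor's note: every failure in this block traces to the same root cause visible in the logs above — QEMU is launched through socket_vmnet_client, and the connection to the /var/run/socket_vmnet unix socket is refused, i.e. the socket_vmnet daemon is not running (or is listening at a different path). A minimal probe, assuming the install layout shown in the log (/opt/socket_vmnet on the agent; paths may differ on other hosts):

	ls -l /var/run/socket_vmnet                       # the daemon's unix socket should exist
	sudo /opt/socket_vmnet/bin/socket_vmnet_client \
	     /var/run/socket_vmnet true                   # prints 'Failed to connect ... Connection refused' if the daemon is down

If either step fails, restarting the socket_vmnet service on the build agent should clear this entire group of failures; the tests themselves never reach provisioning.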

TestStartStop/group/old-k8s-version/serial/FirstStart (11.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-485000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-485000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (11.827264042s)

-- stdout --
	* [old-k8s-version-485000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node old-k8s-version-485000 in cluster old-k8s-version-485000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-485000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:23:49.883728    4154 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:23:49.883861    4154 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:49.883864    4154 out.go:309] Setting ErrFile to fd 2...
	I0610 07:23:49.883867    4154 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:49.883932    4154 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:23:49.884971    4154 out.go:303] Setting JSON to false
	I0610 07:23:49.900100    4154 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1399,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:23:49.900166    4154 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:23:49.904666    4154 out.go:177] * [old-k8s-version-485000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:23:49.911633    4154 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:23:49.911720    4154 notify.go:220] Checking for updates...
	I0610 07:23:49.915617    4154 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:23:49.918543    4154 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:23:49.921599    4154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:23:49.924647    4154 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:23:49.927583    4154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:23:49.930938    4154 config.go:182] Loaded profile config "false-176000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:23:49.931004    4154 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:23:49.931048    4154 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:23:49.935584    4154 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:23:49.942586    4154 start.go:297] selected driver: qemu2
	I0610 07:23:49.942591    4154 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:23:49.942597    4154 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:23:49.944363    4154 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:23:49.947581    4154 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:23:49.949084    4154 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:23:49.949105    4154 cni.go:84] Creating CNI manager for ""
	I0610 07:23:49.949114    4154 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 07:23:49.949118    4154 start_flags.go:319] config:
	{Name:old-k8s-version-485000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-485000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:23:49.949206    4154 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:49.956575    4154 out.go:177] * Starting control plane node old-k8s-version-485000 in cluster old-k8s-version-485000
	I0610 07:23:49.960625    4154 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 07:23:49.960646    4154 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 07:23:49.960657    4154 cache.go:57] Caching tarball of preloaded images
	I0610 07:23:49.960725    4154 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:23:49.960734    4154 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0610 07:23:49.960792    4154 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/old-k8s-version-485000/config.json ...
	I0610 07:23:49.960804    4154 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/old-k8s-version-485000/config.json: {Name:mk77f5b39be8747bf593a79323db9cbc306f5c20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:23:49.961001    4154 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:23:49.961014    4154 start.go:364] acquiring machines lock for old-k8s-version-485000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:49.961041    4154 start.go:368] acquired machines lock for "old-k8s-version-485000" in 22.792µs
	I0610 07:23:49.961051    4154 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-485000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:49.961078    4154 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:49.969556    4154 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:23:49.985609    4154 start.go:159] libmachine.API.Create for "old-k8s-version-485000" (driver="qemu2")
	I0610 07:23:49.985641    4154 client.go:168] LocalClient.Create starting
	I0610 07:23:49.985695    4154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:49.985719    4154 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:49.985727    4154 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:49.985773    4154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:49.985788    4154 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:49.985793    4154 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:49.986110    4154 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:50.121730    4154 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:50.251418    4154 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:50.251429    4154 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:50.251584    4154 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2
	I0610 07:23:50.260323    4154 main.go:141] libmachine: STDOUT: 
	I0610 07:23:50.260337    4154 main.go:141] libmachine: STDERR: 
	I0610 07:23:50.260389    4154 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2 +20000M
	I0610 07:23:50.267496    4154 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:50.267508    4154 main.go:141] libmachine: STDERR: 
	I0610 07:23:50.267526    4154 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2
	I0610 07:23:50.267534    4154 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:50.267579    4154 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:bb:1b:62:e1:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2
	I0610 07:23:50.269076    4154 main.go:141] libmachine: STDOUT: 
	I0610 07:23:50.269090    4154 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:50.269106    4154 client.go:171] LocalClient.Create took 283.46925ms
	I0610 07:23:52.271197    4154 start.go:128] duration metric: createHost completed in 2.310177041s
	I0610 07:23:52.271257    4154 start.go:83] releasing machines lock for "old-k8s-version-485000", held for 2.310280917s
	W0610 07:23:52.271324    4154 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:52.282501    4154 out.go:177] * Deleting "old-k8s-version-485000" in qemu2 ...
	W0610 07:23:52.302765    4154 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:52.302791    4154 start.go:702] Will try again in 5 seconds ...
	I0610 07:23:57.304733    4154 start.go:364] acquiring machines lock for old-k8s-version-485000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:59.398067    4154 start.go:368] acquired machines lock for "old-k8s-version-485000" in 2.093318333s
	I0610 07:23:59.398177    4154 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-485000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP:
MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:59.398431    4154 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:59.406433    4154 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:23:59.453054    4154 start.go:159] libmachine.API.Create for "old-k8s-version-485000" (driver="qemu2")
	I0610 07:23:59.453091    4154 client.go:168] LocalClient.Create starting
	I0610 07:23:59.453206    4154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:59.453247    4154 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:59.453264    4154 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:59.453345    4154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:59.453375    4154 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:59.453390    4154 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:59.453866    4154 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:59.579038    4154 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:59.623392    4154 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:59.623398    4154 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:59.623564    4154 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2
	I0610 07:23:59.632181    4154 main.go:141] libmachine: STDOUT: 
	I0610 07:23:59.632201    4154 main.go:141] libmachine: STDERR: 
	I0610 07:23:59.632259    4154 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2 +20000M
	I0610 07:23:59.639668    4154 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:59.639684    4154 main.go:141] libmachine: STDERR: 
	I0610 07:23:59.639713    4154 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2
	I0610 07:23:59.639720    4154 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:59.639759    4154 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:ea:af:01:44:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2
	I0610 07:23:59.641296    4154 main.go:141] libmachine: STDOUT: 
	I0610 07:23:59.641313    4154 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:59.641325    4154 client.go:171] LocalClient.Create took 188.231125ms
	I0610 07:24:01.643465    4154 start.go:128] duration metric: createHost completed in 2.245035333s
	I0610 07:24:01.643531    4154 start.go:83] releasing machines lock for "old-k8s-version-485000", held for 2.245504125s
	W0610 07:24:01.643900    4154 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-485000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-485000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:01.653602    4154 out.go:177] 
	W0610 07:24:01.656812    4154 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:01.656883    4154 out.go:239] * 
	* 
	W0610 07:24:01.659724    4154 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:24:01.667568    4154 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-485000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000: exit status 7 (66.530917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-485000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (11.90s)
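
Editor's note: for context on what createHost does before the launch fails, libmachine first materializes the boot disk with two plain qemu-img calls, as logged above. A standalone sketch of that sequence (file names are illustrative; the size matches the Disk=20000 setting):

	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2   # convert the raw seed disk to qcow2
	qemu-img resize disk.qcow2 +20000M                           # grow the image by 20000 MB

Both steps succeed in every attempt here ("Image resized."); the failure occurs only afterwards, when the assembled qemu-system-aarch64 command is handed to socket_vmnet_client for networking.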

TestStartStop/group/no-preload/serial/FirstStart (9.87s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-457000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-457000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (9.80766975s)

-- stdout --
	* [no-preload-457000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node no-preload-457000 in cluster no-preload-457000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-457000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:23:57.080984    4263 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:23:57.081124    4263 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:57.081126    4263 out.go:309] Setting ErrFile to fd 2...
	I0610 07:23:57.081129    4263 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:23:57.081211    4263 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:23:57.082275    4263 out.go:303] Setting JSON to false
	I0610 07:23:57.097565    4263 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1407,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:23:57.097657    4263 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:23:57.102680    4263 out.go:177] * [no-preload-457000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:23:57.110652    4263 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:23:57.114609    4263 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:23:57.110700    4263 notify.go:220] Checking for updates...
	I0610 07:23:57.118563    4263 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:23:57.121634    4263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:23:57.124643    4263 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:23:57.127576    4263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:23:57.130930    4263 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:23:57.131006    4263 config.go:182] Loaded profile config "old-k8s-version-485000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0610 07:23:57.131049    4263 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:23:57.135618    4263 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:23:57.142586    4263 start.go:297] selected driver: qemu2
	I0610 07:23:57.142591    4263 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:23:57.142597    4263 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:23:57.144585    4263 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:23:57.147626    4263 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:23:57.150648    4263 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:23:57.150679    4263 cni.go:84] Creating CNI manager for ""
	I0610 07:23:57.150687    4263 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:23:57.150691    4263 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:23:57.150697    4263 start_flags.go:319] config:
	{Name:no-preload-457000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:d
ocker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:23:57.150787    4263 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:57.154664    4263 out.go:177] * Starting control plane node no-preload-457000 in cluster no-preload-457000
	I0610 07:23:57.161591    4263 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:23:57.161677    4263 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/no-preload-457000/config.json ...
	I0610 07:23:57.161696    4263 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/no-preload-457000/config.json: {Name:mk38ff05b3366aa9445a59395f13d0656d79df82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:23:57.161693    4263 cache.go:107] acquiring lock: {Name:mkaf236e56782bba9b6c8c54257bd71547baa2ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:57.161699    4263 cache.go:107] acquiring lock: {Name:mk6db799a9e0dc60db257d9ebb6eeda96c3b6c1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:57.161727    4263 cache.go:107] acquiring lock: {Name:mk601f80f0a186d02e61be83c126971ebdaff969 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:57.161773    4263 cache.go:115] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 07:23:57.161779    4263 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 93.291µs
	I0610 07:23:57.161784    4263 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 07:23:57.161792    4263 cache.go:107] acquiring lock: {Name:mk498e0debe5609d9849af53d61564cd77e8872b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:57.161911    4263 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.27.2
	I0610 07:23:57.161927    4263 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:23:57.161927    4263 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.27.2
	I0610 07:23:57.161941    4263 start.go:364] acquiring machines lock for no-preload-457000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:23:57.161953    4263 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0610 07:23:57.161970    4263 start.go:368] acquired machines lock for "no-preload-457000" in 23.916µs
	I0610 07:23:57.161954    4263 cache.go:107] acquiring lock: {Name:mk35d1f568b1b2c690260d22d2402f53007f55c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:57.161966    4263 cache.go:107] acquiring lock: {Name:mk6f8fa193d9c9ea0251c2b13272bc58ca83e7d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:57.161981    4263 start.go:93] Provisioning new machine with config: &{Name:no-preload-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:23:57.162019    4263 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:23:57.165584    4263 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:23:57.162067    4263 cache.go:107] acquiring lock: {Name:mk02e4ef44b11c8e5be206accce91ed7fd77bdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:57.162105    4263 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.27.2
	I0610 07:23:57.162107    4263 cache.go:107] acquiring lock: {Name:mk14207edd5deb6dcd9a74da0bfdcc7ef5b9416c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:23:57.162410    4263 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.27.2
	I0610 07:23:57.166270    4263 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0610 07:23:57.166313    4263 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0610 07:23:57.170221    4263 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.27.2
	I0610 07:23:57.173346    4263 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.27.2
	I0610 07:23:57.173391    4263 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.27.2
	I0610 07:23:57.175117    4263 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0610 07:23:57.177502    4263 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0610 07:23:57.177500    4263 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0610 07:23:57.177537    4263 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.27.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.27.2
	I0610 07:23:57.181866    4263 start.go:159] libmachine.API.Create for "no-preload-457000" (driver="qemu2")
	I0610 07:23:57.181884    4263 client.go:168] LocalClient.Create starting
	I0610 07:23:57.181941    4263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:23:57.181960    4263 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:57.181989    4263 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:57.182042    4263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:23:57.182058    4263 main.go:141] libmachine: Decoding PEM data...
	I0610 07:23:57.182069    4263 main.go:141] libmachine: Parsing certificate...
	I0610 07:23:57.182425    4263 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:23:57.309108    4263 main.go:141] libmachine: Creating SSH key...
	I0610 07:23:57.378021    4263 main.go:141] libmachine: Creating Disk image...
	I0610 07:23:57.378031    4263 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:23:57.378197    4263 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2
	I0610 07:23:57.387319    4263 main.go:141] libmachine: STDOUT: 
	I0610 07:23:57.387342    4263 main.go:141] libmachine: STDERR: 
	I0610 07:23:57.387411    4263 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2 +20000M
	I0610 07:23:57.395126    4263 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:23:57.395165    4263 main.go:141] libmachine: STDERR: 
	I0610 07:23:57.395198    4263 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2
	I0610 07:23:57.395207    4263 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:23:57.395247    4263 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:1f:ca:e4:6d:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2
	I0610 07:23:57.397468    4263 main.go:141] libmachine: STDOUT: 
	I0610 07:23:57.397486    4263 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:23:57.397506    4263 client.go:171] LocalClient.Create took 215.624667ms
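Every failure in this run reduces to the STDERR line above: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client cannot hand QEMU a network file descriptor. A hypothetical pre-flight probe of that unix socket (an assumption, not part of minikube or the test suite) would surface the problem before any VM is created:

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the socket_vmnet unix socket; "connection refused" here is exactly
// the failure the driver hits when it wraps qemu-system-aarch64.
func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening")
}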
	I0610 07:23:58.606132    4263 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2
	I0610 07:23:58.639192    4263 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2
	I0610 07:23:58.652487    4263 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2
	I0610 07:23:58.692557    4263 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0610 07:23:58.744442    4263 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1
	I0610 07:23:58.812588    4263 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0610 07:23:58.812605    4263 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 1.650863584s
	I0610 07:23:58.812620    4263 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0610 07:23:58.972627    4263 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0610 07:23:59.038275    4263 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2
	I0610 07:23:59.397886    4263 start.go:128] duration metric: createHost completed in 2.235921s
	I0610 07:23:59.397920    4263 start.go:83] releasing machines lock for "no-preload-457000", held for 2.23601325s
	W0610 07:23:59.397972    4263 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:59.420465    4263 out.go:177] * Deleting "no-preload-457000" in qemu2 ...
	W0610 07:23:59.434807    4263 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:23:59.434824    4263 start.go:702] Will try again in 5 seconds ...
	I0610 07:24:01.349895    4263 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0610 07:24:01.349942    4263 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 4.188023s
	I0610 07:24:01.349967    4263 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0610 07:24:01.363268    4263 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 exists
	I0610 07:24:01.363325    4263 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.2" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2" took 4.201547792s
	I0610 07:24:01.363353    4263 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.2 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 succeeded
	I0610 07:24:01.840920    4263 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 exists
	I0610 07:24:01.840929    4263 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.2" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2" took 4.679370708s
	I0610 07:24:01.840935    4263 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.2 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 succeeded
	I0610 07:24:02.385829    4263 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 exists
	I0610 07:24:02.385850    4263 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.2" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2" took 5.224136333s
	I0610 07:24:02.385867    4263 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.2 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 succeeded
	I0610 07:24:03.191863    4263 cache.go:157] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 exists
	I0610 07:24:03.191907    4263 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.2" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2" took 6.03040625s
	I0610 07:24:03.191935    4263 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.2 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 succeeded
	I0610 07:24:04.434935    4263 start.go:364] acquiring machines lock for no-preload-457000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:04.435242    4263 start.go:368] acquired machines lock for "no-preload-457000" in 248.834µs
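The lock spec above carries Delay:500ms and Timeout:13m0s, which implies an acquire-with-retry loop: attempt the lock, sleep Delay between attempts, give up after Timeout. A simplified sketch of that pattern, assuming Go 1.18+ for TryLock (this is an illustration of the pattern, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// acquire polls a try-lock every `delay` until `timeout` elapses, mirroring
// the Delay/Timeout fields in the lock spec logged above.
func acquire(mu *sync.Mutex, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if mu.TryLock() { // non-blocking attempt (Go 1.18+)
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	var mu sync.Mutex
	if err := acquire(&mu, 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	defer mu.Unlock()
	fmt.Println("acquired machines lock")
}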
	I0610 07:24:04.435352    4263 start.go:93] Provisioning new machine with config: &{Name:no-preload-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:24:04.435665    4263 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:24:04.441391    4263 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:24:04.489345    4263 start.go:159] libmachine.API.Create for "no-preload-457000" (driver="qemu2")
	I0610 07:24:04.489408    4263 client.go:168] LocalClient.Create starting
	I0610 07:24:04.489502    4263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:24:04.489557    4263 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:04.489588    4263 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:04.489659    4263 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:24:04.489690    4263 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:04.489701    4263 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:04.490171    4263 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:24:04.622575    4263 main.go:141] libmachine: Creating SSH key...
	I0610 07:24:04.802334    4263 main.go:141] libmachine: Creating Disk image...
	I0610 07:24:04.802344    4263 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:24:04.802494    4263 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2
	I0610 07:24:04.811210    4263 main.go:141] libmachine: STDOUT: 
	I0610 07:24:04.811223    4263 main.go:141] libmachine: STDERR: 
	I0610 07:24:04.811263    4263 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2 +20000M
	I0610 07:24:04.818455    4263 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:24:04.818479    4263 main.go:141] libmachine: STDERR: 
	I0610 07:24:04.818494    4263 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2
	I0610 07:24:04.818499    4263 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:24:04.818537    4263 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:3b:c3:88:6b:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2
	I0610 07:24:04.820080    4263 main.go:141] libmachine: STDOUT: 
	I0610 07:24:04.820095    4263 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:04.820107    4263 client.go:171] LocalClient.Create took 330.706042ms
	I0610 07:24:06.820285    4263 start.go:128] duration metric: createHost completed in 2.384667166s
	I0610 07:24:06.820345    4263 start.go:83] releasing machines lock for "no-preload-457000", held for 2.385156166s
	W0610 07:24:06.820724    4263 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-457000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-457000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:06.830401    4263 out.go:177] 
	W0610 07:24:06.834356    4263 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:06.834419    4263 out.go:239] * 
	* 
	W0610 07:24:06.837112    4263 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:24:06.846370    4263 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-457000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000: exit status 7 (64.199958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-485000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-485000 create -f testdata/busybox.yaml: exit status 1 (30.476792ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-485000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-485000 create -f testdata/busybox.yaml failed: exit status 1
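The kubectl error here is a downstream symptom: the first start exited before a context for old-k8s-version-485000 was ever written to the kubeconfig. A hypothetical diagnostic using k8s.io/client-go (an added dependency, not part of the test suite) lists the contexts that actually exist:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path taken from the KUBECONFIG value elsewhere in this report.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/15074-894/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	for name := range cfg.Contexts {
		fmt.Println("context:", name)
	}
	if _, ok := cfg.Contexts["old-k8s-version-485000"]; !ok {
		fmt.Println(`context "old-k8s-version-485000" does not exist`)
	}
}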
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000: exit status 7 (28.618416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-485000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000: exit status 7 (27.963375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-485000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-485000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-485000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-485000 describe deploy/metrics-server -n kube-system: exit status 1 (26.715417ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-485000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-485000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000: exit status 7 (28.003208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-485000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-485000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-485000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0: exit status 80 (5.196768791s)

                                                
                                                
-- stdout --
	* [old-k8s-version-485000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	* Using the qemu2 driver based on existing profile
	* Starting control plane node old-k8s-version-485000 in cluster old-k8s-version-485000
	* Restarting existing qemu2 VM for "old-k8s-version-485000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-485000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 07:24:02.131673    4392 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:02.131780    4392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:02.131783    4392 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:02.131785    4392 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:02.131856    4392 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:02.132851    4392 out.go:303] Setting JSON to false
	I0610 07:24:02.148135    4392 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1412,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:24:02.148211    4392 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:24:02.152733    4392 out.go:177] * [old-k8s-version-485000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:24:02.162736    4392 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:24:02.159821    4392 notify.go:220] Checking for updates...
	I0610 07:24:02.169689    4392 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:24:02.176796    4392 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:24:02.180769    4392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:24:02.188814    4392 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:24:02.192778    4392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:24:02.197062    4392 config.go:182] Loaded profile config "old-k8s-version-485000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0610 07:24:02.201757    4392 out.go:177] * Kubernetes 1.27.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.27.2
	I0610 07:24:02.205772    4392 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:24:02.209785    4392 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 07:24:02.216763    4392 start.go:297] selected driver: qemu2
	I0610 07:24:02.216768    4392 start.go:875] validating driver "qemu2" against &{Name:old-k8s-version-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-485000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:02.216839    4392 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:24:02.218919    4392 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:24:02.218946    4392 cni.go:84] Creating CNI manager for ""
	I0610 07:24:02.218952    4392 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 07:24:02.218957    4392 start_flags.go:319] config:
	{Name:old-k8s-version-485000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-485000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:02.219042    4392 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:02.227793    4392 out.go:177] * Starting control plane node old-k8s-version-485000 in cluster old-k8s-version-485000
	I0610 07:24:02.230802    4392 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 07:24:02.230841    4392 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 07:24:02.230856    4392 cache.go:57] Caching tarball of preloaded images
	I0610 07:24:02.230921    4392 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:24:02.230929    4392 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
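The two preload lines above show the cache decision: if the tarball is already on disk, the download is skipped entirely. A minimal sketch of that check (illustrative only; the tarball path is copied from the log and the download step is stubbed out):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Tarball path copied from the preload lines above.
	tarball := "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4"
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("found local preload, skipping download")
		return
	}
	fmt.Println("preload missing; would download it here") // download step intentionally stubbed
}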
	I0610 07:24:02.230996    4392 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/old-k8s-version-485000/config.json ...
	I0610 07:24:02.231303    4392 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:24:02.231314    4392 start.go:364] acquiring machines lock for old-k8s-version-485000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:02.231347    4392 start.go:368] acquired machines lock for "old-k8s-version-485000" in 26.584µs
	I0610 07:24:02.231358    4392 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:24:02.231363    4392 fix.go:55] fixHost starting: 
	I0610 07:24:02.231485    4392 fix.go:103] recreateIfNeeded on old-k8s-version-485000: state=Stopped err=<nil>
	W0610 07:24:02.231496    4392 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:24:02.239780    4392 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-485000" ...
	I0610 07:24:02.243722    4392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:ea:af:01:44:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2
	I0610 07:24:02.245675    4392 main.go:141] libmachine: STDOUT: 
	I0610 07:24:02.245691    4392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:02.245720    4392 fix.go:57] fixHost completed within 14.356167ms
	I0610 07:24:02.245726    4392 start.go:83] releasing machines lock for "old-k8s-version-485000", held for 14.37525ms
	W0610 07:24:02.245732    4392 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:02.245764    4392 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:02.245768    4392 start.go:702] Will try again in 5 seconds ...
	I0610 07:24:07.247630    4392 start.go:364] acquiring machines lock for old-k8s-version-485000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:07.247683    4392 start.go:368] acquired machines lock for "old-k8s-version-485000" in 41.75µs
	I0610 07:24:07.247696    4392 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:24:07.247699    4392 fix.go:55] fixHost starting: 
	I0610 07:24:07.247820    4392 fix.go:103] recreateIfNeeded on old-k8s-version-485000: state=Stopped err=<nil>
	W0610 07:24:07.247825    4392 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:24:07.254562    4392 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-485000" ...
	I0610 07:24:07.261397    4392 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:ea:af:01:44:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/old-k8s-version-485000/disk.qcow2
	I0610 07:24:07.263300    4392 main.go:141] libmachine: STDOUT: 
	I0610 07:24:07.263313    4392 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:07.263330    4392 fix.go:57] fixHost completed within 15.630959ms
	I0610 07:24:07.263333    4392 start.go:83] releasing machines lock for "old-k8s-version-485000", held for 15.646541ms
	W0610 07:24:07.263369    4392 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-485000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-485000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:07.272231    4392 out.go:177] 
	W0610 07:24:07.279353    4392 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:07.279373    4392 out.go:239] * 
	* 
	W0610 07:24:07.279897    4392 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:24:07.294349    4392 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-485000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000: exit status 7 (30.618916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-485000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-457000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-457000 create -f testdata/busybox.yaml: exit status 1 (30.970167ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-457000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-457000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000: exit status 7 (28.491375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-457000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000: exit status 7 (28.243625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-457000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-457000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-457000 describe deploy/metrics-server -n kube-system: exit status 1 (27.166125ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-457000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-457000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000: exit status 7 (27.678375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-457000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-457000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.177029167s)

                                                
                                                
-- stdout --
	* [no-preload-457000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node no-preload-457000 in cluster no-preload-457000
	* Restarting existing qemu2 VM for "no-preload-457000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-457000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 07:24:07.336644    4422 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:07.336741    4422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:07.336744    4422 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:07.336747    4422 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:07.336812    4422 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:07.337816    4422 out.go:303] Setting JSON to false
	I0610 07:24:07.354316    4422 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1417,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:24:07.354382    4422 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:24:07.358194    4422 out.go:177] * [no-preload-457000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:24:07.367389    4422 notify.go:220] Checking for updates...
	I0610 07:24:07.368633    4422 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:24:07.372268    4422 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:24:07.375299    4422 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:24:07.376434    4422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:24:07.379309    4422 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:24:07.382356    4422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:24:07.385564    4422 config.go:182] Loaded profile config "no-preload-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:07.385814    4422 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:24:07.390286    4422 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 07:24:07.397341    4422 start.go:297] selected driver: qemu2
	I0610 07:24:07.397355    4422 start.go:875] validating driver "qemu2" against &{Name:no-preload-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:07.397412    4422 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:24:07.399262    4422 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:24:07.399286    4422 cni.go:84] Creating CNI manager for ""
	I0610 07:24:07.399292    4422 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:24:07.399297    4422 start_flags.go:319] config:
	{Name:no-preload-457000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:no-preload-457000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:07.399365    4422 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:07.407340    4422 out.go:177] * Starting control plane node no-preload-457000 in cluster no-preload-457000
	I0610 07:24:07.411248    4422 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:24:07.411339    4422 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/no-preload-457000/config.json ...
	I0610 07:24:07.411361    4422 cache.go:107] acquiring lock: {Name:mk6db799a9e0dc60db257d9ebb6eeda96c3b6c1a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:07.411370    4422 cache.go:107] acquiring lock: {Name:mkaf236e56782bba9b6c8c54257bd71547baa2ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:07.411412    4422 cache.go:107] acquiring lock: {Name:mk601f80f0a186d02e61be83c126971ebdaff969 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:07.411448    4422 cache.go:115] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 exists
	I0610 07:24:07.411454    4422 cache.go:115] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0610 07:24:07.411459    4422 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.333µs
	I0610 07:24:07.411465    4422 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0610 07:24:07.411470    4422 cache.go:107] acquiring lock: {Name:mk6f8fa193d9c9ea0251c2b13272bc58ca83e7d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:07.411484    4422 cache.go:115] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 exists
	I0610 07:24:07.411493    4422 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.27.2" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2" took 103.209µs
	I0610 07:24:07.411497    4422 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.27.2 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.27.2 succeeded
	I0610 07:24:07.411491    4422 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.27.2" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2" took 112.167µs
	I0610 07:24:07.411511    4422 cache.go:115] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 exists
	I0610 07:24:07.411513    4422 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.27.2 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.27.2 succeeded
	I0610 07:24:07.411515    4422 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.27.2" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2" took 45.958µs
	I0610 07:24:07.411536    4422 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.27.2 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.27.2 succeeded
	I0610 07:24:07.411507    4422 cache.go:107] acquiring lock: {Name:mk35d1f568b1b2c690260d22d2402f53007f55c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:07.411520    4422 cache.go:107] acquiring lock: {Name:mk498e0debe5609d9849af53d61564cd77e8872b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:07.411547    4422 cache.go:107] acquiring lock: {Name:mk02e4ef44b11c8e5be206accce91ed7fd77bdc0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:07.411583    4422 cache.go:115] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0610 07:24:07.411586    4422 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 80.667µs
	I0610 07:24:07.411589    4422 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0610 07:24:07.411835    4422 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.7-0
	I0610 07:24:07.411925    4422 cache.go:107] acquiring lock: {Name:mk14207edd5deb6dcd9a74da0bfdcc7ef5b9416c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:07.411931    4422 cache.go:115] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 exists
	I0610 07:24:07.411951    4422 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.27.2" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2" took 512.5µs
	I0610 07:24:07.411961    4422 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.27.2 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.27.2 succeeded
	I0610 07:24:07.411971    4422 cache.go:115] /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I0610 07:24:07.411976    4422 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1" took 393.959µs
	I0610 07:24:07.411980    4422 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I0610 07:24:07.412242    4422 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:24:07.412271    4422 start.go:364] acquiring machines lock for no-preload-457000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:07.412345    4422 start.go:368] acquired machines lock for "no-preload-457000" in 61.458µs
	I0610 07:24:07.412378    4422 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:24:07.412386    4422 fix.go:55] fixHost starting: 
	I0610 07:24:07.412822    4422 fix.go:103] recreateIfNeeded on no-preload-457000: state=Stopped err=<nil>
	W0610 07:24:07.412833    4422 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:24:07.420233    4422 out.go:177] * Restarting existing qemu2 VM for "no-preload-457000" ...
	I0610 07:24:07.423346    4422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:3b:c3:88:6b:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2
	I0610 07:24:07.423928    4422 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.7-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.7-0
	I0610 07:24:07.425772    4422 main.go:141] libmachine: STDOUT: 
	I0610 07:24:07.425792    4422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:07.425822    4422 fix.go:57] fixHost completed within 13.436ms
	I0610 07:24:07.425825    4422 start.go:83] releasing machines lock for "no-preload-457000", held for 13.467292ms
	W0610 07:24:07.425833    4422 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:07.425881    4422 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:07.425885    4422 start.go:702] Will try again in 5 seconds ...
	I0610 07:24:08.417164    4422 cache.go:162] opening:  /Users/jenkins/minikube-integration/15074-894/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.7-0
	I0610 07:24:12.426324    4422 start.go:364] acquiring machines lock for no-preload-457000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:12.426751    4422 start.go:368] acquired machines lock for "no-preload-457000" in 354.083µs
	I0610 07:24:12.426894    4422 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:24:12.426913    4422 fix.go:55] fixHost starting: 
	I0610 07:24:12.427568    4422 fix.go:103] recreateIfNeeded on no-preload-457000: state=Stopped err=<nil>
	W0610 07:24:12.427593    4422 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:24:12.435069    4422 out.go:177] * Restarting existing qemu2 VM for "no-preload-457000" ...
	I0610 07:24:12.439285    4422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:3b:c3:88:6b:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/no-preload-457000/disk.qcow2
	I0610 07:24:12.448834    4422 main.go:141] libmachine: STDOUT: 
	I0610 07:24:12.448894    4422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:12.448976    4422 fix.go:57] fixHost completed within 22.067541ms
	I0610 07:24:12.448994    4422 start.go:83] releasing machines lock for "no-preload-457000", held for 22.221584ms
	W0610 07:24:12.449212    4422 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-457000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-457000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:12.456021    4422 out.go:177] 
	W0610 07:24:12.460208    4422 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:12.460240    4422 out.go:239] * 
	* 
	W0610 07:24:12.462815    4422 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:24:12.472143    4422 out.go:177] 
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-457000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000: exit status 7 (64.89825ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)
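Every start failure in this group traces to the same line in the captured logs: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the connect to the /var/run/socket_vmnet control socket is refused because no socket_vmnet daemon is listening on the build host. A minimal Go sketch of that probe (a hypothetical diagnostic, not part of the test suite):

	// vmnetprobe.go - hypothetical diagnostic, not part of minikube or the
	// test suite: dial the socket_vmnet control socket the same way the
	// socket_vmnet_client wrapper must before it can launch the VM.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// On this host the dial would fail with "connection refused",
			// matching the driver's repeated StartHost error above.
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}

If the probe fails, restarting the socket_vmnet daemon on the Jenkins host is the obvious first step before rerunning this group; the FirstStart/SecondStart failures below all repeat the same refused connect.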
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-485000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000: exit status 7 (33.762375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-485000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)
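This failure and the three old-k8s-version-485000 failures that follow share one precondition: SecondStart never brought the cluster up, so no context named old-k8s-version-485000 was written to the kubeconfig, and anything that builds a client config from that context fails immediately. A hypothetical sketch of the lookup using k8s.io/client-go (the test suite's actual helper is not shown here):

	// contextcheck.go - hypothetical sketch, not the test code: load the
	// kubeconfig and verify a named context exists, failing the same way
	// the tests do when the cluster never started.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		name := "old-k8s-version-485000"
		if _, ok := cfg.Contexts[name]; !ok {
			fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
			os.Exit(1)
		}
	}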
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-485000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-485000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-485000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.160458ms)
** stderr ** 
	error: context "old-k8s-version-485000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-485000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000: exit status 7 (29.2965ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-485000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p old-k8s-version-485000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p old-k8s-version-485000 "sudo crictl images -o json": exit status 89 (38.388583ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-485000"
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p old-k8s-version-485000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p old-k8s-version-485000"
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000: exit status 7 (28.062667ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-485000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
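The decode error above follows directly from exit status 89: with the node stopped, "minikube ssh" prints its plain-text advice instead of crictl's JSON, and the leading '*' is not a valid JSON token. A self-contained illustration of just that decode step (the struct shape mirrors crictl's "images -o json" output and is an assumption here):

	// decodefail.go - hypothetical illustration: unmarshalling minikube's
	// plain-text advice as JSON reproduces the exact error logged above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		out := []byte("* The control plane node must be running for this command\n")
		var images struct {
			Images []struct {
				RepoTags []string `json:"repoTags"`
			} `json:"images"`
		}
		if err := json.Unmarshal(out, &images); err != nil {
			fmt.Println(err) // invalid character '*' looking for beginning of value
		}
	}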
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-485000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-485000 --alsologtostderr -v=1: exit status 89 (39.359583ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p old-k8s-version-485000"
-- /stdout --
** stderr ** 
	I0610 07:24:07.517477    4443 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:07.517770    4443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:07.517773    4443 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:07.517776    4443 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:07.517860    4443 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:07.518041    4443 out.go:303] Setting JSON to false
	I0610 07:24:07.518049    4443 mustload.go:65] Loading cluster: old-k8s-version-485000
	I0610 07:24:07.518231    4443 config.go:182] Loaded profile config "old-k8s-version-485000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0610 07:24:07.521359    4443 out.go:177] * The control plane node must be running for this command
	I0610 07:24:07.525405    4443 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-485000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-485000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000: exit status 7 (29.243417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-485000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000: exit status 7 (29.253292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-485000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)
TestStartStop/group/embed-certs/serial/FirstStart (10.2s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-530000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-530000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (10.128179292s)
-- stdout --
	* [embed-certs-530000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node embed-certs-530000 in cluster embed-certs-530000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-530000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0610 07:24:07.973869    4475 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:07.973982    4475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:07.973985    4475 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:07.973988    4475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:07.974059    4475 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:07.975090    4475 out.go:303] Setting JSON to false
	I0610 07:24:07.990275    4475 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1417,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:24:07.990358    4475 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:24:07.995716    4475 out.go:177] * [embed-certs-530000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:24:07.998731    4475 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:24:07.998808    4475 notify.go:220] Checking for updates...
	I0610 07:24:08.002673    4475 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:24:08.006621    4475 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:24:08.008057    4475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:24:08.011664    4475 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:24:08.014671    4475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:24:08.017997    4475 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:08.018061    4475 config.go:182] Loaded profile config "no-preload-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:08.018096    4475 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:24:08.022586    4475 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:24:08.029652    4475 start.go:297] selected driver: qemu2
	I0610 07:24:08.029658    4475 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:24:08.029667    4475 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:24:08.031479    4475 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:24:08.034639    4475 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:24:08.037721    4475 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:24:08.037738    4475 cni.go:84] Creating CNI manager for ""
	I0610 07:24:08.037745    4475 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:24:08.037752    4475 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:24:08.037759    4475 start_flags.go:319] config:
	{Name:embed-certs-530000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:08.037845    4475 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:08.045799    4475 out.go:177] * Starting control plane node embed-certs-530000 in cluster embed-certs-530000
	I0610 07:24:08.049564    4475 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:24:08.049599    4475 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:24:08.049609    4475 cache.go:57] Caching tarball of preloaded images
	I0610 07:24:08.049660    4475 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:24:08.049665    4475 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:24:08.049717    4475 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/embed-certs-530000/config.json ...
	I0610 07:24:08.049728    4475 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/embed-certs-530000/config.json: {Name:mk4e7909268c9ffa21a883cd40e607f67ef77515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:24:08.049911    4475 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:24:08.049920    4475 start.go:364] acquiring machines lock for embed-certs-530000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:08.049948    4475 start.go:368] acquired machines lock for "embed-certs-530000" in 22.75µs
	I0610 07:24:08.049959    4475 start.go:93] Provisioning new machine with config: &{Name:embed-certs-530000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:24:08.049983    4475 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:24:08.058670    4475 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:24:08.074364    4475 start.go:159] libmachine.API.Create for "embed-certs-530000" (driver="qemu2")
	I0610 07:24:08.074393    4475 client.go:168] LocalClient.Create starting
	I0610 07:24:08.074446    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:24:08.074467    4475 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:08.074477    4475 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:08.074521    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:24:08.074535    4475 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:08.074542    4475 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:08.074831    4475 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:24:08.182098    4475 main.go:141] libmachine: Creating SSH key...
	I0610 07:24:08.238848    4475 main.go:141] libmachine: Creating Disk image...
	I0610 07:24:08.238856    4475 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:24:08.238999    4475 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2
	I0610 07:24:08.247485    4475 main.go:141] libmachine: STDOUT: 
	I0610 07:24:08.247505    4475 main.go:141] libmachine: STDERR: 
	I0610 07:24:08.247553    4475 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2 +20000M
	I0610 07:24:08.254981    4475 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:24:08.255001    4475 main.go:141] libmachine: STDERR: 
	I0610 07:24:08.255021    4475 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2
	I0610 07:24:08.255028    4475 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:24:08.255073    4475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:38:0b:96:54:be -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2
	I0610 07:24:08.256613    4475 main.go:141] libmachine: STDOUT: 
	I0610 07:24:08.256628    4475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:08.256644    4475 client.go:171] LocalClient.Create took 182.25175ms
	I0610 07:24:10.258754    4475 start.go:128] duration metric: createHost completed in 2.208823291s
	I0610 07:24:10.258867    4475 start.go:83] releasing machines lock for "embed-certs-530000", held for 2.208933292s
	W0610 07:24:10.258932    4475 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:10.266345    4475 out.go:177] * Deleting "embed-certs-530000" in qemu2 ...
	W0610 07:24:10.286368    4475 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:10.286395    4475 start.go:702] Will try again in 5 seconds ...
	I0610 07:24:15.288543    4475 start.go:364] acquiring machines lock for embed-certs-530000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:15.756859    4475 start.go:368] acquired machines lock for "embed-certs-530000" in 468.195167ms
	I0610 07:24:15.757048    4475 start.go:93] Provisioning new machine with config: &{Name:embed-certs-530000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:24:15.757358    4475 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:24:15.765844    4475 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:24:15.813973    4475 start.go:159] libmachine.API.Create for "embed-certs-530000" (driver="qemu2")
	I0610 07:24:15.814013    4475 client.go:168] LocalClient.Create starting
	I0610 07:24:15.814149    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:24:15.814201    4475 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:15.814222    4475 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:15.814305    4475 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:24:15.814334    4475 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:15.814349    4475 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:15.814955    4475 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:24:15.933806    4475 main.go:141] libmachine: Creating SSH key...
	I0610 07:24:16.017408    4475 main.go:141] libmachine: Creating Disk image...
	I0610 07:24:16.017414    4475 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:24:16.017560    4475 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2
	I0610 07:24:16.026143    4475 main.go:141] libmachine: STDOUT: 
	I0610 07:24:16.026155    4475 main.go:141] libmachine: STDERR: 
	I0610 07:24:16.026217    4475 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2 +20000M
	I0610 07:24:16.033306    4475 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:24:16.033317    4475 main.go:141] libmachine: STDERR: 
	I0610 07:24:16.033329    4475 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2
	I0610 07:24:16.033336    4475 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:24:16.033376    4475 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:49:29:7d:6c:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2
	I0610 07:24:16.034861    4475 main.go:141] libmachine: STDOUT: 
	I0610 07:24:16.034873    4475 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:16.034886    4475 client.go:171] LocalClient.Create took 220.874917ms
	I0610 07:24:18.036978    4475 start.go:128] duration metric: createHost completed in 2.279653959s
	I0610 07:24:18.037034    4475 start.go:83] releasing machines lock for "embed-certs-530000", held for 2.2802195s
	W0610 07:24:18.037424    4475 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-530000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-530000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:18.047000    4475 out.go:177] 
	W0610 07:24:18.051982    4475 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:18.052019    4475 out.go:239] * 
	* 
	W0610 07:24:18.054742    4475 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:24:18.061847    4475 out.go:177] 
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-530000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000: exit status 7 (65.721042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-530000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.20s)
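One difference from the no-preload profile shows up in this log: preload.go found the preloaded image tarball already in the local cache and skipped the download, so only the VM start itself failed. A hypothetical sketch of that existence check (the file name is copied from the log line; the helper itself is invented):

	// preloadcheck.go - hypothetical sketch of the check logged by
	// preload.go above: look for the preloaded tarball in the minikube
	// cache and skip the download when it is present.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func preloadPath(minikubeHome, k8sVersion, runtime, arch string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
			k8sVersion, runtime, arch)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.27.2", "docker", "arm64")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload, skipping download:", p)
		} else {
			fmt.Println("no local preload:", p)
		}
	}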
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-457000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000: exit status 7 (31.107875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-457000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-457000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-457000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.349708ms)
** stderr ** 
	error: context "no-preload-457000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-457000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000: exit status 7 (28.418708ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p no-preload-457000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p no-preload-457000 "sudo crictl images -o json": exit status 89 (37.497166ms)
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-457000"
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p no-preload-457000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p no-preload-457000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000: exit status 7 (27.907ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
TestStartStop/group/no-preload/serial/Pause (0.1s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-457000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-457000 --alsologtostderr -v=1: exit status 89 (39.564625ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p no-preload-457000"

-- /stdout --
** stderr ** 
	I0610 07:24:12.736975    4496 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:12.737139    4496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:12.737142    4496 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:12.737144    4496 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:12.737207    4496 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:12.737432    4496 out.go:303] Setting JSON to false
	I0610 07:24:12.737440    4496 mustload.go:65] Loading cluster: no-preload-457000
	I0610 07:24:12.737618    4496 config.go:182] Loaded profile config "no-preload-457000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:12.742231    4496 out.go:177] * The control plane node must be running for this command
	I0610 07:24:12.746224    4496 out.go:177]   To start a cluster, run: "minikube start -p no-preload-457000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-457000 --alsologtostderr -v=1 failed: exit status 89
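
Note: exit status 89 is the same pre-flight refusal seen in the image check above: the profile exists but its node is stopped, so pause has nothing to act on. The recovery path the tool itself suggests (a sketch, reusing the log's own commands; it can only succeed once the socket_vmnet failure seen in the start tests is resolved):

    out/minikube-darwin-arm64 start -p no-preload-457000
    out/minikube-darwin-arm64 pause -p no-preload-457000 --alsologtostderr -v=1
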
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000: exit status 7 (28.167833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-457000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000: exit status 7 (27.614167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-693000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-693000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (9.675418875s)

-- stdout --
	* [default-k8s-diff-port-693000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node default-k8s-diff-port-693000 in cluster default-k8s-diff-port-693000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-693000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:24:13.435237    4531 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:13.435372    4531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:13.435375    4531 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:13.435378    4531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:13.435449    4531 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:13.436477    4531 out.go:303] Setting JSON to false
	I0610 07:24:13.451824    4531 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1423,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:24:13.451903    4531 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:24:13.455769    4531 out.go:177] * [default-k8s-diff-port-693000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:24:13.462788    4531 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:24:13.462886    4531 notify.go:220] Checking for updates...
	I0610 07:24:13.466759    4531 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:24:13.469837    4531 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:24:13.472790    4531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:24:13.475791    4531 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:24:13.478815    4531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:24:13.482109    4531 config.go:182] Loaded profile config "embed-certs-530000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:13.482170    4531 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:13.482219    4531 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:24:13.485678    4531 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:24:13.492783    4531 start.go:297] selected driver: qemu2
	I0610 07:24:13.492788    4531 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:24:13.492793    4531 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:24:13.494724    4531 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:24:13.496216    4531 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:24:13.499825    4531 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:24:13.499840    4531 cni.go:84] Creating CNI manager for ""
	I0610 07:24:13.499846    4531 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:24:13.499852    4531 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:24:13.499858    4531 start_flags.go:319] config:
	{Name:default-k8s-diff-port-693000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-693000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:13.499952    4531 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:13.507749    4531 out.go:177] * Starting control plane node default-k8s-diff-port-693000 in cluster default-k8s-diff-port-693000
	I0610 07:24:13.511757    4531 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:24:13.511794    4531 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:24:13.511803    4531 cache.go:57] Caching tarball of preloaded images
	I0610 07:24:13.511874    4531 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:24:13.511880    4531 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:24:13.511953    4531 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/default-k8s-diff-port-693000/config.json ...
	I0610 07:24:13.511966    4531 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/default-k8s-diff-port-693000/config.json: {Name:mk3e0b9596cf22fddf69e7abb5c84640c1cb0eb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:24:13.512178    4531 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:24:13.512190    4531 start.go:364] acquiring machines lock for default-k8s-diff-port-693000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:13.512220    4531 start.go:368] acquired machines lock for "default-k8s-diff-port-693000" in 24.792µs
	I0610 07:24:13.512233    4531 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-693000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:24:13.512260    4531 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:24:13.520802    4531 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:24:13.537493    4531 start.go:159] libmachine.API.Create for "default-k8s-diff-port-693000" (driver="qemu2")
	I0610 07:24:13.537511    4531 client.go:168] LocalClient.Create starting
	I0610 07:24:13.537568    4531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:24:13.537588    4531 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:13.537598    4531 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:13.537633    4531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:24:13.537648    4531 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:13.537656    4531 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:13.537971    4531 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:24:13.650141    4531 main.go:141] libmachine: Creating SSH key...
	I0610 07:24:13.737119    4531 main.go:141] libmachine: Creating Disk image...
	I0610 07:24:13.737127    4531 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:24:13.737278    4531 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2
	I0610 07:24:13.745674    4531 main.go:141] libmachine: STDOUT: 
	I0610 07:24:13.745689    4531 main.go:141] libmachine: STDERR: 
	I0610 07:24:13.745741    4531 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2 +20000M
	I0610 07:24:13.752865    4531 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:24:13.752886    4531 main.go:141] libmachine: STDERR: 
	I0610 07:24:13.752906    4531 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2
	I0610 07:24:13.752912    4531 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:24:13.752955    4531 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:c4:64:06:4f:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2
	I0610 07:24:13.754456    4531 main.go:141] libmachine: STDOUT: 
	I0610 07:24:13.754471    4531 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:13.754490    4531 client.go:171] LocalClient.Create took 216.981375ms
	I0610 07:24:15.756612    4531 start.go:128] duration metric: createHost completed in 2.244398542s
	I0610 07:24:15.756696    4531 start.go:83] releasing machines lock for "default-k8s-diff-port-693000", held for 2.244537542s
	W0610 07:24:15.756778    4531 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:15.773774    4531 out.go:177] * Deleting "default-k8s-diff-port-693000" in qemu2 ...
	W0610 07:24:15.788451    4531 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:15.788494    4531 start.go:702] Will try again in 5 seconds ...
	I0610 07:24:20.790497    4531 start.go:364] acquiring machines lock for default-k8s-diff-port-693000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:20.790961    4531 start.go:368] acquired machines lock for "default-k8s-diff-port-693000" in 366.291µs
	I0610 07:24:20.791106    4531 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-693000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:24:20.791481    4531 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:24:20.800894    4531 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:24:20.847630    4531 start.go:159] libmachine.API.Create for "default-k8s-diff-port-693000" (driver="qemu2")
	I0610 07:24:20.847690    4531 client.go:168] LocalClient.Create starting
	I0610 07:24:20.847789    4531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:24:20.847841    4531 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:20.847859    4531 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:20.847945    4531 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:24:20.847974    4531 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:20.847990    4531 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:20.848523    4531 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:24:20.969705    4531 main.go:141] libmachine: Creating SSH key...
	I0610 07:24:21.025724    4531 main.go:141] libmachine: Creating Disk image...
	I0610 07:24:21.025729    4531 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:24:21.025866    4531 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2
	I0610 07:24:21.034328    4531 main.go:141] libmachine: STDOUT: 
	I0610 07:24:21.034342    4531 main.go:141] libmachine: STDERR: 
	I0610 07:24:21.034389    4531 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2 +20000M
	I0610 07:24:21.041490    4531 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:24:21.041503    4531 main.go:141] libmachine: STDERR: 
	I0610 07:24:21.041516    4531 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2
	I0610 07:24:21.041526    4531 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:24:21.041561    4531 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:78:35:41:67:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2
	I0610 07:24:21.042988    4531 main.go:141] libmachine: STDOUT: 
	I0610 07:24:21.043001    4531 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:21.043012    4531 client.go:171] LocalClient.Create took 195.323875ms
	I0610 07:24:23.045181    4531 start.go:128] duration metric: createHost completed in 2.253731584s
	I0610 07:24:23.045257    4531 start.go:83] releasing machines lock for "default-k8s-diff-port-693000", held for 2.254343458s
	W0610 07:24:23.045594    4531 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-693000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:23.057252    4531 out.go:177] 
	W0610 07:24:23.058738    4531 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:23.058783    4531 out.go:239] * 
	* 
	W0610 07:24:23.061348    4531 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:24:23.072250    4531 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-693000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000: exit status 7 (69.767333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.75s)
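
Note: this is the primary failure; the rest of the group is fallout. Both VM creation attempts die when socket_vmnet_client gets "Connection refused" on /var/run/socket_vmnet, which means the socket_vmnet daemon on the build host is not serving that socket. A quick host-side triage before rerunning (a sketch; the launchd check assumes socket_vmnet was installed as a service per its README):

    ls -l /var/run/socket_vmnet                 # the listening socket should exist
    sudo launchctl list | grep -i socket_vmnet  # the daemon should be loaded and running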

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-530000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-530000 create -f testdata/busybox.yaml: exit status 1 (29.715791ms)

** stderr ** 
	error: context "embed-certs-530000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-530000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000: exit status 7 (28.126ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-530000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000: exit status 7 (27.929042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-530000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
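
Note: the missing kubectl context is another knock-on effect: since no start for this profile ever succeeded, minikube never wrote a context for it into the kubeconfig, so every kubectl-driven subtest fails at context lookup. A one-line sanity check (sketch):

    kubectl config get-contexts   # the profile's context will be absent until a start succeeds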

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-530000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-530000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-530000 describe deploy/metrics-server -n kube-system: exit status 1 (26.53525ms)

** stderr ** 
	error: context "embed-certs-530000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-530000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000: exit status 7 (27.764166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-530000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)
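
Note: the addons enable command itself exits zero because it only updates the profile's stored config; the follow-up kubectl describe is what needs a live cluster. With a running cluster, the image override could be verified directly (a sketch; assumes the metrics-server deployment exists by then):

    kubectl --context embed-certs-530000 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'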

TestStartStop/group/embed-certs/serial/SecondStart (5.21s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-530000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-530000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.173390625s)

-- stdout --
	* [embed-certs-530000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node embed-certs-530000 in cluster embed-certs-530000
	* Restarting existing qemu2 VM for "embed-certs-530000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-530000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:24:18.520010    4562 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:18.520117    4562 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:18.520120    4562 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:18.520122    4562 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:18.520199    4562 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:18.521214    4562 out.go:303] Setting JSON to false
	I0610 07:24:18.536435    4562 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1428,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:24:18.536516    4562 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:24:18.541297    4562 out.go:177] * [embed-certs-530000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:24:18.544297    4562 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:24:18.544383    4562 notify.go:220] Checking for updates...
	I0610 07:24:18.552251    4562 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:24:18.555295    4562 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:24:18.558324    4562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:24:18.562210    4562 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:24:18.565252    4562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:24:18.568587    4562 config.go:182] Loaded profile config "embed-certs-530000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:18.568819    4562 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:24:18.573198    4562 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 07:24:18.580237    4562 start.go:297] selected driver: qemu2
	I0610 07:24:18.580244    4562 start.go:875] validating driver "qemu2" against &{Name:embed-certs-530000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:18.580315    4562 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:24:18.582291    4562 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:24:18.582315    4562 cni.go:84] Creating CNI manager for ""
	I0610 07:24:18.582322    4562 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:24:18.582326    4562 start_flags.go:319] config:
	{Name:embed-certs-530000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:embed-certs-530000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:18.582412    4562 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:18.590173    4562 out.go:177] * Starting control plane node embed-certs-530000 in cluster embed-certs-530000
	I0610 07:24:18.594192    4562 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:24:18.594230    4562 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:24:18.594246    4562 cache.go:57] Caching tarball of preloaded images
	I0610 07:24:18.594305    4562 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:24:18.594310    4562 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:24:18.594387    4562 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/embed-certs-530000/config.json ...
	I0610 07:24:18.594745    4562 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:24:18.594755    4562 start.go:364] acquiring machines lock for embed-certs-530000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:18.594787    4562 start.go:368] acquired machines lock for "embed-certs-530000" in 27.209µs
	I0610 07:24:18.594798    4562 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:24:18.594803    4562 fix.go:55] fixHost starting: 
	I0610 07:24:18.594908    4562 fix.go:103] recreateIfNeeded on embed-certs-530000: state=Stopped err=<nil>
	W0610 07:24:18.594916    4562 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:24:18.601141    4562 out.go:177] * Restarting existing qemu2 VM for "embed-certs-530000" ...
	I0610 07:24:18.605293    4562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:49:29:7d:6c:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2
	I0610 07:24:18.607115    4562 main.go:141] libmachine: STDOUT: 
	I0610 07:24:18.607136    4562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:18.607164    4562 fix.go:57] fixHost completed within 12.362ms
	I0610 07:24:18.607169    4562 start.go:83] releasing machines lock for "embed-certs-530000", held for 12.378042ms
	W0610 07:24:18.607175    4562 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:18.607209    4562 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:18.607214    4562 start.go:702] Will try again in 5 seconds ...
	I0610 07:24:23.609116    4562 start.go:364] acquiring machines lock for embed-certs-530000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:23.616192    4562 start.go:368] acquired machines lock for "embed-certs-530000" in 7.035792ms
	I0610 07:24:23.616213    4562 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:24:23.616217    4562 fix.go:55] fixHost starting: 
	I0610 07:24:23.616367    4562 fix.go:103] recreateIfNeeded on embed-certs-530000: state=Stopped err=<nil>
	W0610 07:24:23.616373    4562 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:24:23.628245    4562 out.go:177] * Restarting existing qemu2 VM for "embed-certs-530000" ...
	I0610 07:24:23.632276    4562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:49:29:7d:6c:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/embed-certs-530000/disk.qcow2
	I0610 07:24:23.634419    4562 main.go:141] libmachine: STDOUT: 
	I0610 07:24:23.634437    4562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:23.634453    4562 fix.go:57] fixHost completed within 18.236667ms
	I0610 07:24:23.634458    4562 start.go:83] releasing machines lock for "embed-certs-530000", held for 18.261541ms
	W0610 07:24:23.634504    4562 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-530000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-530000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:23.642235    4562 out.go:177] 
	W0610 07:24:23.646305    4562 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:23.646314    4562 out.go:239] * 
	* 
	W0610 07:24:23.646993    4562 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:24:23.660255    4562 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-530000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000: exit status 7 (35.557125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-530000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.21s)
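
Note: on the second start the driver takes the restart path (fixHost on the existing, stopped VM) rather than creating a new one, but it fails on the same socket_vmnet connection, so the SecondStart subtests never exercise the post-stop behavior they target. The tool's own suggested recovery, shown as a sketch (flags trimmed relative to the test's full invocation):

    out/minikube-darwin-arm64 delete -p embed-certs-530000
    out/minikube-darwin-arm64 start -p embed-certs-530000 --driver=qemu2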

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-693000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-693000 create -f testdata/busybox.yaml: exit status 1 (29.485ms)

** stderr ** 
	error: context "default-k8s-diff-port-693000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-693000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000: exit status 7 (29.039417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-693000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000: exit status 7 (28.340584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
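
Diagnostic note: this group's kubectl steps fail with context "default-k8s-diff-port-693000" does not exist because the earlier start never brought the cluster up, so no context was ever written to the kubeconfig. A quick host-side spot-check (a sketch; the KUBECONFIG path is the one reported by the run above):

	KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig \
	  kubectl config get-contexts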

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-693000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-693000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-693000 describe deploy/metrics-server -n kube-system: exit status 1 (26.291292ms)

** stderr ** 
	error: context "default-k8s-diff-port-693000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-693000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000: exit status 7 (28.075291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-693000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-693000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.174268417s)

-- stdout --
	* [default-k8s-diff-port-693000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node default-k8s-diff-port-693000 in cluster default-k8s-diff-port-693000
	* Restarting existing qemu2 VM for "default-k8s-diff-port-693000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-693000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:24:23.528389    4591 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:23.528505    4591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:23.528508    4591 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:23.528510    4591 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:23.528581    4591 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:23.529440    4591 out.go:303] Setting JSON to false
	I0610 07:24:23.544508    4591 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1433,"bootTime":1686405630,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:24:23.544580    4591 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:24:23.549297    4591 out.go:177] * [default-k8s-diff-port-693000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:24:23.556296    4591 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:24:23.560245    4591 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:24:23.556382    4591 notify.go:220] Checking for updates...
	I0610 07:24:23.566250    4591 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:24:23.569250    4591 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:24:23.570619    4591 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:24:23.573223    4591 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:24:23.576522    4591 config.go:182] Loaded profile config "default-k8s-diff-port-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:23.576760    4591 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:24:23.581069    4591 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 07:24:23.588265    4591 start.go:297] selected driver: qemu2
	I0610 07:24:23.588269    4591 start.go:875] validating driver "qemu2" against &{Name:default-k8s-diff-port-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-693000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:23.588330    4591 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:24:23.590198    4591 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0610 07:24:23.590220    4591 cni.go:84] Creating CNI manager for ""
	I0610 07:24:23.590226    4591 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:24:23.590231    4591 start_flags.go:319] config:
	{Name:default-k8s-diff-port-693000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:default-k8s-diff-port-693000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:23.590310    4591 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:23.598223    4591 out.go:177] * Starting control plane node default-k8s-diff-port-693000 in cluster default-k8s-diff-port-693000
	I0610 07:24:23.602249    4591 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:24:23.602269    4591 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:24:23.602286    4591 cache.go:57] Caching tarball of preloaded images
	I0610 07:24:23.602345    4591 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:24:23.602349    4591 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:24:23.602401    4591 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/default-k8s-diff-port-693000/config.json ...
	I0610 07:24:23.602703    4591 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:24:23.602711    4591 start.go:364] acquiring machines lock for default-k8s-diff-port-693000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:23.602740    4591 start.go:368] acquired machines lock for "default-k8s-diff-port-693000" in 23.916µs
	I0610 07:24:23.602752    4591 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:24:23.602756    4591 fix.go:55] fixHost starting: 
	I0610 07:24:23.602866    4591 fix.go:103] recreateIfNeeded on default-k8s-diff-port-693000: state=Stopped err=<nil>
	W0610 07:24:23.602876    4591 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:24:23.610243    4591 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-693000" ...
	I0610 07:24:23.614209    4591 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:78:35:41:67:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2
	I0610 07:24:23.616110    4591 main.go:141] libmachine: STDOUT: 
	I0610 07:24:23.616130    4591 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:23.616161    4591 fix.go:57] fixHost completed within 13.403916ms
	I0610 07:24:23.616166    4591 start.go:83] releasing machines lock for "default-k8s-diff-port-693000", held for 13.422625ms
	W0610 07:24:23.616174    4591 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:23.616205    4591 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:23.616209    4591 start.go:702] Will try again in 5 seconds ...
	I0610 07:24:28.618174    4591 start.go:364] acquiring machines lock for default-k8s-diff-port-693000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:28.618582    4591 start.go:368] acquired machines lock for "default-k8s-diff-port-693000" in 338.041µs
	I0610 07:24:28.618704    4591 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:24:28.618727    4591 fix.go:55] fixHost starting: 
	I0610 07:24:28.619439    4591 fix.go:103] recreateIfNeeded on default-k8s-diff-port-693000: state=Stopped err=<nil>
	W0610 07:24:28.619463    4591 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:24:28.629840    4591 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-693000" ...
	I0610 07:24:28.633015    4591 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:78:35:41:67:b0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/default-k8s-diff-port-693000/disk.qcow2
	I0610 07:24:28.641981    4591 main.go:141] libmachine: STDOUT: 
	I0610 07:24:28.642038    4591 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:28.642117    4591 fix.go:57] fixHost completed within 23.393042ms
	I0610 07:24:28.642135    4591 start.go:83] releasing machines lock for "default-k8s-diff-port-693000", held for 23.532709ms
	W0610 07:24:28.642300    4591 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-693000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:28.650745    4591 out.go:177] 
	W0610 07:24:28.653882    4591 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:28.653933    4591 out.go:239] * 
	* 
	W0610 07:24:28.656393    4591 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:24:28.663795    4591 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-693000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000: exit status 7 (65.46525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.24s)
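
Diagnostic note: both restart attempts above stop at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A minimal host-side check (a sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as in the command line logged above):

	pgrep -fl socket_vmnet        # is the socket_vmnet daemon running at all?
	ls -l /var/run/socket_vmnet   # does the socket exist, and can this user reach it?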

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-530000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000: exit status 7 (28.022167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-530000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-530000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-530000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-530000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.80925ms)

** stderr ** 
	error: context "embed-certs-530000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-530000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000: exit status 7 (28.720292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-530000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p embed-certs-530000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p embed-certs-530000 "sudo crictl images -o json": exit status 89 (38.512459ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-530000"

-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p embed-certs-530000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p embed-certs-530000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000: exit status 7 (28.279209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-530000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
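
Diagnostic note: the "invalid character '*'" decode failure is a cascade of the same stopped host: the ssh command returns minikube's plain-text advice instead of crictl JSON, and the test feeds that output straight into the JSON decoder. A sketch for validating the output before decoding (assumes jq is available on the host):

	out/minikube-darwin-arm64 ssh -p embed-certs-530000 "sudo crictl images -o json" \
	  | jq -e . >/dev/null && echo "valid JSON" || echo "not JSON; node likely stopped"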

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-530000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-530000 --alsologtostderr -v=1: exit status 89 (40.177125ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p embed-certs-530000"

-- /stdout --
** stderr ** 
	I0610 07:24:23.879970    4609 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:23.880106    4609 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:23.880109    4609 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:23.880112    4609 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:23.880184    4609 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:23.880404    4609 out.go:303] Setting JSON to false
	I0610 07:24:23.880412    4609 mustload.go:65] Loading cluster: embed-certs-530000
	I0610 07:24:23.880590    4609 config.go:182] Loaded profile config "embed-certs-530000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:23.884935    4609 out.go:177] * The control plane node must be running for this command
	I0610 07:24:23.889058    4609 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-530000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-530000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000: exit status 7 (27.869667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-530000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000: exit status 7 (28.230584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-530000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

TestStartStop/group/newest-cni/serial/FirstStart (10.1s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-675000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-675000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (10.029115084s)

-- stdout --
	* [newest-cni-675000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting control plane node newest-cni-675000 in cluster newest-cni-675000
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-675000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0610 07:24:24.336152    4632 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:24.336273    4632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:24.336276    4632 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:24.336279    4632 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:24.336356    4632 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:24.337429    4632 out.go:303] Setting JSON to false
	I0610 07:24:24.352411    4632 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1434,"bootTime":1686405630,"procs":405,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:24:24.352470    4632 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:24:24.356812    4632 out.go:177] * [newest-cni-675000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:24:24.363835    4632 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:24:24.363834    4632 notify.go:220] Checking for updates...
	I0610 07:24:24.367766    4632 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:24:24.370786    4632 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:24:24.373796    4632 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:24:24.376838    4632 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:24:24.379780    4632 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:24:24.383109    4632 config.go:182] Loaded profile config "default-k8s-diff-port-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:24.383177    4632 config.go:182] Loaded profile config "multinode-214000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:24.383220    4632 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:24:24.387812    4632 out.go:177] * Using the qemu2 driver based on user configuration
	I0610 07:24:24.394735    4632 start.go:297] selected driver: qemu2
	I0610 07:24:24.394742    4632 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:24:24.394748    4632 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:24:24.396616    4632 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0610 07:24:24.396633    4632 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0610 07:24:24.403667    4632 out.go:177] * Automatically selected the socket_vmnet network
	I0610 07:24:24.406860    4632 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0610 07:24:24.406881    4632 cni.go:84] Creating CNI manager for ""
	I0610 07:24:24.406895    4632 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:24:24.406898    4632 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0610 07:24:24.406904    4632 start_flags.go:319] config:
	{Name:newest-cni-675000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:24.407004    4632 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:24.425825    4632 out.go:177] * Starting control plane node newest-cni-675000 in cluster newest-cni-675000
	I0610 07:24:24.429721    4632 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:24:24.429745    4632 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:24:24.429756    4632 cache.go:57] Caching tarball of preloaded images
	I0610 07:24:24.429818    4632 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:24:24.429823    4632 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:24:24.429892    4632 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/newest-cni-675000/config.json ...
	I0610 07:24:24.429905    4632 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/newest-cni-675000/config.json: {Name:mk87f7bfdc2ee05eab9317b49b7b73dcde1912ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:24:24.430129    4632 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:24:24.430141    4632 start.go:364] acquiring machines lock for newest-cni-675000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:24.430174    4632 start.go:368] acquired machines lock for "newest-cni-675000" in 26.792µs
	I0610 07:24:24.430188    4632 start.go:93] Provisioning new machine with config: &{Name:newest-cni-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:24:24.430231    4632 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:24:24.438800    4632 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:24:24.455718    4632 start.go:159] libmachine.API.Create for "newest-cni-675000" (driver="qemu2")
	I0610 07:24:24.455739    4632 client.go:168] LocalClient.Create starting
	I0610 07:24:24.455792    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:24:24.455812    4632 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:24.455823    4632 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:24.455860    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:24:24.455880    4632 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:24.455889    4632 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:24.456187    4632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:24:24.565825    4632 main.go:141] libmachine: Creating SSH key...
	I0610 07:24:24.807025    4632 main.go:141] libmachine: Creating Disk image...
	I0610 07:24:24.807040    4632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:24:24.807225    4632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2
	I0610 07:24:24.816514    4632 main.go:141] libmachine: STDOUT: 
	I0610 07:24:24.816536    4632 main.go:141] libmachine: STDERR: 
	I0610 07:24:24.816617    4632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2 +20000M
	I0610 07:24:24.824025    4632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:24:24.824038    4632 main.go:141] libmachine: STDERR: 
	I0610 07:24:24.824060    4632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2
	I0610 07:24:24.824080    4632 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:24:24.824121    4632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:fb:04:75:1a:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2
	I0610 07:24:24.825579    4632 main.go:141] libmachine: STDOUT: 
	I0610 07:24:24.825593    4632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:24.825611    4632 client.go:171] LocalClient.Create took 369.877667ms
	I0610 07:24:26.827727    4632 start.go:128] duration metric: createHost completed in 2.397555666s
	I0610 07:24:26.827780    4632 start.go:83] releasing machines lock for "newest-cni-675000", held for 2.397674083s
	W0610 07:24:26.827836    4632 start.go:687] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:26.839111    4632 out.go:177] * Deleting "newest-cni-675000" in qemu2 ...
	W0610 07:24:26.859490    4632 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:26.859523    4632 start.go:702] Will try again in 5 seconds ...
	I0610 07:24:31.861640    4632 start.go:364] acquiring machines lock for newest-cni-675000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:31.862161    4632 start.go:368] acquired machines lock for "newest-cni-675000" in 406.167µs
	I0610 07:24:31.862337    4632 start.go:93] Provisioning new machine with config: &{Name:newest-cni-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0610 07:24:31.862642    4632 start.go:125] createHost starting for "" (driver="qemu2")
	I0610 07:24:31.868329    4632 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0610 07:24:31.916798    4632 start.go:159] libmachine.API.Create for "newest-cni-675000" (driver="qemu2")
	I0610 07:24:31.916833    4632 client.go:168] LocalClient.Create starting
	I0610 07:24:31.916943    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/ca.pem
	I0610 07:24:31.916989    4632 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:31.917017    4632 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:31.917129    4632 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15074-894/.minikube/certs/cert.pem
	I0610 07:24:31.917157    4632 main.go:141] libmachine: Decoding PEM data...
	I0610 07:24:31.917179    4632 main.go:141] libmachine: Parsing certificate...
	I0610 07:24:31.917750    4632 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/15074-894/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso...
	I0610 07:24:32.042212    4632 main.go:141] libmachine: Creating SSH key...
	I0610 07:24:32.283773    4632 main.go:141] libmachine: Creating Disk image...
	I0610 07:24:32.283782    4632 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0610 07:24:32.283922    4632 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2.raw /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2
	I0610 07:24:32.292870    4632 main.go:141] libmachine: STDOUT: 
	I0610 07:24:32.292899    4632 main.go:141] libmachine: STDERR: 
	I0610 07:24:32.292950    4632 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2 +20000M
	I0610 07:24:32.300189    4632 main.go:141] libmachine: STDOUT: Image resized.
	
	I0610 07:24:32.300206    4632 main.go:141] libmachine: STDERR: 
	I0610 07:24:32.300218    4632 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2.raw and /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2
	I0610 07:24:32.300226    4632 main.go:141] libmachine: Starting QEMU VM...
	I0610 07:24:32.300262    4632 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:3d:de:a8:02:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2
	I0610 07:24:32.301738    4632 main.go:141] libmachine: STDOUT: 
	I0610 07:24:32.301758    4632 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:32.301771    4632 client.go:171] LocalClient.Create took 384.946041ms
	I0610 07:24:34.303867    4632 start.go:128] duration metric: createHost completed in 2.441280375s
	I0610 07:24:34.303933    4632 start.go:83] releasing machines lock for "newest-cni-675000", held for 2.441825334s
	W0610 07:24:34.304288    4632 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-675000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-675000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:34.309334    4632 out.go:177] 
	W0610 07:24:34.314412    4632 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:34.314480    4632 out.go:239] * 
	* 
	W0610 07:24:34.316479    4632 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:24:34.325303    4632 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-675000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000: exit status 7 (65.211542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.10s)
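Every failure in this group reduces to the same root cause shown in the stderr above: nothing is listening on /var/run/socket_vmnet, so the qemu2 driver's socket_vmnet_client invocation is refused before QEMU ever starts. The following standalone Go probe is an illustrative sketch, not part of the suite, and assumes the default socket path from this run; it reproduces the connection the driver attempts:

	// socketprobe.go — illustrative sketch: reproduce the qemu2 driver's
	// connection attempt against the socket_vmnet control socket.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // assumption: default path from this run
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here matches the failures above:
			// the socket path is configured, but no daemon is serving it.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If the probe is refused on the build agent, restarting the socket_vmnet daemon is the thing to try before rerunning the suite; "minikube delete -p <profile>" alone will not help, since recreation dials the same dead socket.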

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-693000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000: exit status 7 (31.229917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-693000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-693000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-693000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.185875ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-693000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-693000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000: exit status 7 (28.22825ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.05s)
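Both failures above are secondary symptoms: the cluster was never provisioned, so no kubeconfig context named "default-k8s-diff-port-693000" was ever written, and every kubectl call fails before reaching a server. A minimal client-go sketch of that precondition check (illustrative; only the context name is taken from this run):

	// contextcheck.go — illustrative sketch: verify a kubeconfig context
	// exists before shelling out to kubectl, mirroring the error above.
	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		const name = "default-k8s-diff-port-693000"
		cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
		if err != nil {
			fmt.Fprintf(os.Stderr, "loading kubeconfig: %v\n", err)
			os.Exit(1)
		}
		if _, ok := cfg.Contexts[name]; !ok {
			// Same condition kubectl reports: context "..." does not exist.
			fmt.Fprintf(os.Stderr, "context %q does not exist\n", name)
			os.Exit(1)
		}
		fmt.Printf("context %q found\n", name)
	}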

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-693000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-693000 "sudo crictl images -o json": exit status 89 (38.49475ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-693000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p default-k8s-diff-port-693000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p default-k8s-diff-port-693000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000: exit status 7 (28.058709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
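The decode error above is mechanical rather than mysterious: the test feeds the output of "sudo crictl images -o json" to a JSON decoder, but with the node stopped it receives minikube's plain-text banner instead, and decoding stops at the leading '*'. A minimal reproduction (the struct is an assumption sketching the shape the test consumes, not the suite's actual type):

	// decodeimages.go — illustrative sketch: the decode step that fails above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList sketches the relevant part of crictl's JSON output.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// What the test actually received instead of JSON:
		banner := []byte("* The control plane node must be running for this command")

		var list imageList
		if err := json.Unmarshal(banner, &list); err != nil {
			// Prints: invalid character '*' looking for beginning of value
			fmt.Println("decode error:", err)
		}
	}

The "images missing" diff that follows is the same symptom: nothing could be listed, so every expected image shows up as missing.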

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-693000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-693000 --alsologtostderr -v=1: exit status 89 (39.390041ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-693000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 07:24:28.924145    4653 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:28.924260    4653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:28.924263    4653 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:28.924266    4653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:28.924337    4653 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:28.924560    4653 out.go:303] Setting JSON to false
	I0610 07:24:28.924569    4653 mustload.go:65] Loading cluster: default-k8s-diff-port-693000
	I0610 07:24:28.924731    4653 config.go:182] Loaded profile config "default-k8s-diff-port-693000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:28.928013    4653 out.go:177] * The control plane node must be running for this command
	I0610 07:24:28.932174    4653 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-693000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-693000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000: exit status 7 (27.562583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-693000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000: exit status 7 (27.70925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-693000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
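Exit status 89 is minikube's "control plane node must be running" refusal, and "status" itself exits 7 for a stopped host, so the printed host state, not the exit code, is the reliable signal in these post-mortems. A hedged sketch of gating pause on that state (binary path and profile name are from this run; error handling is abbreviated):

	// pauseguard.go — illustrative sketch: only attempt pause when the
	// host reports Running, matching the status checks above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		const profile = "default-k8s-diff-port-693000"
		// status exits non-zero for a stopped host ("may be ok"), so the
		// captured stdout is inspected even when the error is non-nil.
		out, _ := exec.Command("out/minikube-darwin-arm64",
			"status", "--format={{.Host}}", "-p", profile).Output()
		state := strings.TrimSpace(string(out))
		if state != "Running" {
			fmt.Printf("host %q is %q; skipping pause\n", profile, state)
			return
		}
		if err := exec.Command("out/minikube-darwin-arm64", "pause", "-p", profile).Run(); err != nil {
			fmt.Println("pause failed:", err)
		}
	}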

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-675000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-675000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2: exit status 80 (5.175301208s)

                                                
                                                
-- stdout --
	* [newest-cni-675000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting control plane node newest-cni-675000 in cluster newest-cni-675000
	* Restarting existing qemu2 VM for "newest-cni-675000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-675000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 07:24:34.650089    4690 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:34.650189    4690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:34.650191    4690 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:34.650194    4690 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:34.650258    4690 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:34.651209    4690 out.go:303] Setting JSON to false
	I0610 07:24:34.666987    4690 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1444,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:24:34.667074    4690 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:24:34.671691    4690 out.go:177] * [newest-cni-675000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:24:34.678575    4690 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:24:34.678619    4690 notify.go:220] Checking for updates...
	I0610 07:24:34.685506    4690 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:24:34.688597    4690 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:24:34.691601    4690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:24:34.694531    4690 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:24:34.697582    4690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:24:34.700979    4690 config.go:182] Loaded profile config "newest-cni-675000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:34.701227    4690 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:24:34.705540    4690 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 07:24:34.712629    4690 start.go:297] selected driver: qemu2
	I0610 07:24:34.712634    4690 start.go:875] validating driver "qemu2" against &{Name:newest-cni-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:34.712683    4690 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:24:34.714626    4690 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0610 07:24:34.714646    4690 cni.go:84] Creating CNI manager for ""
	I0610 07:24:34.714652    4690 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:24:34.714658    4690 start_flags.go:319] config:
	{Name:newest-cni-675000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:newest-cni-675000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:24:34.714742    4690 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:24:34.718628    4690 out.go:177] * Starting control plane node newest-cni-675000 in cluster newest-cni-675000
	I0610 07:24:34.725470    4690 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:24:34.725492    4690 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:24:34.725504    4690 cache.go:57] Caching tarball of preloaded images
	I0610 07:24:34.725560    4690 preload.go:174] Found /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0610 07:24:34.725565    4690 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:24:34.725628    4690 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/newest-cni-675000/config.json ...
	I0610 07:24:34.725969    4690 cache.go:195] Successfully downloaded all kic artifacts
	I0610 07:24:34.725978    4690 start.go:364] acquiring machines lock for newest-cni-675000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:34.726003    4690 start.go:368] acquired machines lock for "newest-cni-675000" in 19.666µs
	I0610 07:24:34.726013    4690 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:24:34.726018    4690 fix.go:55] fixHost starting: 
	I0610 07:24:34.726159    4690 fix.go:103] recreateIfNeeded on newest-cni-675000: state=Stopped err=<nil>
	W0610 07:24:34.726167    4690 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:24:34.730638    4690 out.go:177] * Restarting existing qemu2 VM for "newest-cni-675000" ...
	I0610 07:24:34.737638    4690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:3d:de:a8:02:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2
	I0610 07:24:34.739602    4690 main.go:141] libmachine: STDOUT: 
	I0610 07:24:34.739621    4690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:34.739653    4690 fix.go:57] fixHost completed within 13.634333ms
	I0610 07:24:34.739657    4690 start.go:83] releasing machines lock for "newest-cni-675000", held for 13.651208ms
	W0610 07:24:34.739664    4690 start.go:687] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:34.739709    4690 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:34.739714    4690 start.go:702] Will try again in 5 seconds ...
	I0610 07:24:39.741757    4690 start.go:364] acquiring machines lock for newest-cni-675000: {Name:mkf0fcdbd5cb8a682ac13892d6b8f3e042ace8c6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0610 07:24:39.742095    4690 start.go:368] acquired machines lock for "newest-cni-675000" in 247.917µs
	I0610 07:24:39.742214    4690 start.go:96] Skipping create...Using existing machine configuration
	I0610 07:24:39.742235    4690 fix.go:55] fixHost starting: 
	I0610 07:24:39.743026    4690 fix.go:103] recreateIfNeeded on newest-cni-675000: state=Stopped err=<nil>
	W0610 07:24:39.743054    4690 fix.go:129] unexpected machine state, will restart: <nil>
	I0610 07:24:39.751313    4690 out.go:177] * Restarting existing qemu2 VM for "newest-cni-675000" ...
	I0610 07:24:39.755746    4690 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/7.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:3d:de:a8:02:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/15074-894/.minikube/machines/newest-cni-675000/disk.qcow2
	I0610 07:24:39.764731    4690 main.go:141] libmachine: STDOUT: 
	I0610 07:24:39.764777    4690 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0610 07:24:39.764845    4690 fix.go:57] fixHost completed within 22.611625ms
	I0610 07:24:39.764861    4690 start.go:83] releasing machines lock for "newest-cni-675000", held for 22.744292ms
	W0610 07:24:39.765002    4690 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-675000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-675000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0610 07:24:39.772368    4690 out.go:177] 
	W0610 07:24:39.776523    4690 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0610 07:24:39.776575    4690 out.go:239] * 
	* 
	W0610 07:24:39.779279    4690 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:24:39.786412    4690 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-675000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.27.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000: exit status 7 (67.49075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.24s)
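The SecondStart log also shows the shape of minikube's recovery path: fixHost fails, the driver warns "will try again", sleeps five seconds, and makes exactly one more attempt before exiting with GUEST_PROVISION. A stripped-down sketch of that control flow, with startHost standing in for the driver's fixHost path (not minikube's real code):

	// retrystart.go — illustrative sketch of the single-retry pattern in
	// the log above; startHost is a stand-in for the driver's fixHost path.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func startHost() error {
		// In this run every attempt failed identically:
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := startHost(); err != nil {
				fmt.Println("* Failed to start qemu2 VM:", err)
			}
		}
	}

Because both attempts dial the same dead socket, the retry cannot succeed; the five-second pause accounts for most of this test's 5.24s runtime.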

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 ssh -p newest-cni-675000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p newest-cni-675000 "sudo crictl images -o json": exit status 89 (45.372833ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-675000"

                                                
                                                
-- /stdout --
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-arm64 ssh -p newest-cni-675000 \"sudo crictl images -o json\"": exit status 89
start_stop_delete_test.go:304: failed to decode images json invalid character '*' looking for beginning of value. output:
* The control plane node must be running for this command
To start a cluster, run: "minikube start -p newest-cni-675000"
start_stop_delete_test.go:304: v1.27.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.7-0",
- 	"registry.k8s.io/kube-apiserver:v1.27.2",
- 	"registry.k8s.io/kube-controller-manager:v1.27.2",
- 	"registry.k8s.io/kube-proxy:v1.27.2",
- 	"registry.k8s.io/kube-scheduler:v1.27.2",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000: exit status 7 (28.779292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-675000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-675000 --alsologtostderr -v=1: exit status 89 (40.686875ms)

                                                
                                                
-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p newest-cni-675000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0610 07:24:39.970212    4703 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:24:39.970352    4703 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:39.970355    4703 out.go:309] Setting ErrFile to fd 2...
	I0610 07:24:39.970357    4703 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:24:39.970425    4703 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:24:39.970637    4703 out.go:303] Setting JSON to false
	I0610 07:24:39.970645    4703 mustload.go:65] Loading cluster: newest-cni-675000
	I0610 07:24:39.970814    4703 config.go:182] Loaded profile config "newest-cni-675000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:24:39.974717    4703 out.go:177] * The control plane node must be running for this command
	I0610 07:24:39.978754    4703 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-675000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-675000 --alsologtostderr -v=1 failed: exit status 89
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000: exit status 7 (28.90125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-675000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000: exit status 7 (29.026542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-675000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

                                                
                                    

Test pass (135/242)

Order passed test Duration
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.09
10 TestDownloadOnly/v1.27.2/json-events 15.61
11 TestDownloadOnly/v1.27.2/preload-exists 0
14 TestDownloadOnly/v1.27.2/kubectl 0
15 TestDownloadOnly/v1.27.2/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.24
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
19 TestBinaryMirror 0.35
29 TestHyperKitDriverInstallOrUpdate 9.15
32 TestErrorSpam/setup 29.69
33 TestErrorSpam/start 0.37
34 TestErrorSpam/status 0.25
35 TestErrorSpam/pause 0.64
36 TestErrorSpam/unpause 0.64
37 TestErrorSpam/stop 3.23
40 TestFunctional/serial/CopySyncFile 0
41 TestFunctional/serial/StartWithProxy 55.39
42 TestFunctional/serial/AuditLog 0
43 TestFunctional/serial/SoftStart 36.85
44 TestFunctional/serial/KubeContext 0.03
45 TestFunctional/serial/KubectlGetPods 0.05
48 TestFunctional/serial/CacheCmd/cache/add_remote 5.8
49 TestFunctional/serial/CacheCmd/cache/add_local 1.27
50 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.03
51 TestFunctional/serial/CacheCmd/cache/list 0.03
52 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
53 TestFunctional/serial/CacheCmd/cache/cache_reload 1.25
54 TestFunctional/serial/CacheCmd/cache/delete 0.07
55 TestFunctional/serial/MinikubeKubectlCmd 0.49
56 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.56
57 TestFunctional/serial/ExtraConfig 36.85
58 TestFunctional/serial/ComponentHealth 0.04
59 TestFunctional/serial/LogsCmd 0.65
60 TestFunctional/serial/LogsFileCmd 0.65
62 TestFunctional/parallel/ConfigCmd 0.2
63 TestFunctional/parallel/DashboardCmd 8.82
64 TestFunctional/parallel/DryRun 0.22
65 TestFunctional/parallel/InternationalLanguage 0.11
66 TestFunctional/parallel/StatusCmd 0.24
71 TestFunctional/parallel/AddonsCmd 0.12
72 TestFunctional/parallel/PersistentVolumeClaim 24.03
74 TestFunctional/parallel/SSHCmd 0.13
75 TestFunctional/parallel/CpCmd 0.28
77 TestFunctional/parallel/FileSync 0.07
78 TestFunctional/parallel/CertSync 0.4
82 TestFunctional/parallel/NodeLabels 0.04
84 TestFunctional/parallel/NonActiveRuntimeDisabled 0.07
86 TestFunctional/parallel/License 0.57
88 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.22
89 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
91 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.09
92 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
93 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
94 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
95 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
96 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
97 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
98 TestFunctional/parallel/ServiceCmd/DeployApp 6.1
99 TestFunctional/parallel/ServiceCmd/List 0.32
100 TestFunctional/parallel/ServiceCmd/JSONOutput 0.3
101 TestFunctional/parallel/ServiceCmd/HTTPS 0.11
102 TestFunctional/parallel/ServiceCmd/Format 0.1
103 TestFunctional/parallel/ServiceCmd/URL 0.1
104 TestFunctional/parallel/ProfileCmd/profile_not_create 0.19
105 TestFunctional/parallel/ProfileCmd/profile_list 0.15
106 TestFunctional/parallel/ProfileCmd/profile_json_output 0.14
107 TestFunctional/parallel/MountCmd/any-port 7.1
108 TestFunctional/parallel/MountCmd/specific-port 0.97
110 TestFunctional/parallel/Version/short 0.04
111 TestFunctional/parallel/Version/components 0.18
112 TestFunctional/parallel/ImageCommands/ImageListShort 0.09
113 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
115 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
116 TestFunctional/parallel/ImageCommands/ImageBuild 2.67
117 TestFunctional/parallel/ImageCommands/Setup 2.78
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.24
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.59
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.49
121 TestFunctional/parallel/DockerEnv/bash 0.35
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.06
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
125 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
126 TestFunctional/parallel/ImageCommands/ImageRemove 0.15
127 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
128 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.58
129 TestFunctional/delete_addon-resizer_images 0.12
130 TestFunctional/delete_my-image_image 0.04
131 TestFunctional/delete_minikube_cached_images 0.04
135 TestImageBuild/serial/Setup 29.38
136 TestImageBuild/serial/NormalBuild 1.99
138 TestImageBuild/serial/BuildWithDockerIgnore 0.11
139 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.1
142 TestIngressAddonLegacy/StartLegacyK8sCluster 84.41
144 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 20.82
145 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.21
149 TestJSONOutput/start/Command 43.75
150 TestJSONOutput/start/Audit 0
152 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
153 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
155 TestJSONOutput/pause/Command 0.29
156 TestJSONOutput/pause/Audit 0
158 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/unpause/Command 0.23
162 TestJSONOutput/unpause/Audit 0
164 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/stop/Command 9.08
168 TestJSONOutput/stop/Audit 0
170 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
172 TestErrorJSONOutput 0.35
177 TestMainNoArgs 0.03
178 TestMinikubeProfile 61.5
234 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
238 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
239 TestNoKubernetes/serial/ProfileList 0.15
240 TestNoKubernetes/serial/Stop 0.06
242 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
260 TestStartStop/group/old-k8s-version/serial/Stop 0.06
261 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.09
265 TestStartStop/group/no-preload/serial/Stop 0.06
266 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
282 TestStartStop/group/embed-certs/serial/Stop 0.06
283 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.09
287 TestStartStop/group/default-k8s-diff-port/serial/Stop 0.06
288 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.09
300 TestStartStop/group/newest-cni/serial/DeployApp 0
301 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
302 TestStartStop/group/newest-cni/serial/Stop 0.06
303 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
305 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
306 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-414000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-414000: exit status 85 (90.233042ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-414000 | jenkins | v1.30.1 | 10 Jun 23 07:03 PDT |          |
	|         | -p download-only-414000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 07:03:11
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 07:03:11.789006    1338 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:03:11.789123    1338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:03:11.789126    1338 out.go:309] Setting ErrFile to fd 2...
	I0610 07:03:11.789128    1338 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:03:11.789193    1338 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	W0610 07:03:11.789253    1338 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15074-894/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15074-894/.minikube/config/config.json: no such file or directory
	I0610 07:03:11.790357    1338 out.go:303] Setting JSON to true
	I0610 07:03:11.806625    1338 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":161,"bootTime":1686405630,"procs":395,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:03:11.806685    1338 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:03:11.813344    1338 out.go:97] [download-only-414000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:03:11.816336    1338 out.go:169] MINIKUBE_LOCATION=15074
	W0610 07:03:11.813483    1338 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball: no such file or directory
	I0610 07:03:11.813484    1338 notify.go:220] Checking for updates...
	I0610 07:03:11.826292    1338 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:03:11.829335    1338 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:03:11.830751    1338 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:03:11.834279    1338 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	W0610 07:03:11.840286    1338 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 07:03:11.840474    1338 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:03:11.845394    1338 out.go:97] Using the qemu2 driver based on user configuration
	I0610 07:03:11.845414    1338 start.go:297] selected driver: qemu2
	I0610 07:03:11.845429    1338 start.go:875] validating driver "qemu2" against <nil>
	I0610 07:03:11.845494    1338 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0610 07:03:11.849305    1338 out.go:169] Automatically selected the socket_vmnet network
	I0610 07:03:11.854784    1338 start_flags.go:382] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0610 07:03:11.854879    1338 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0610 07:03:11.854916    1338 cni.go:84] Creating CNI manager for ""
	I0610 07:03:11.854932    1338 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0610 07:03:11.854937    1338 start_flags.go:319] config:
	{Name:download-only-414000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-414000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:03:11.855112    1338 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:03:11.859276    1338 out.go:97] Downloading VM boot image ...
	I0610 07:03:11.859295    1338 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/iso/arm64/minikube-v1.30.1-1686096373-16019-arm64.iso
	I0610 07:03:27.569360    1338 out.go:97] Starting control plane node download-only-414000 in cluster download-only-414000
	I0610 07:03:27.569385    1338 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 07:03:27.675276    1338 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 07:03:27.675340    1338 cache.go:57] Caching tarball of preloaded images
	I0610 07:03:27.675546    1338 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 07:03:27.679639    1338 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0610 07:03:27.679648    1338 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:03:27.904149    1338 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4?checksum=md5:a000baffb0664b293d602f95ed25caa6 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4
	I0610 07:03:39.125180    1338 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:03:39.125326    1338 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:03:39.773759    1338 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0610 07:03:39.773947    1338 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/download-only-414000/config.json ...
	I0610 07:03:39.773965    1338 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/download-only-414000/config.json: {Name:mk7f5c6cd72cdeb7e4eb06700a00aabcc940f64e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0610 07:03:39.774207    1338 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0610 07:03:39.774372    1338 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/darwin/arm64/v1.16.0/kubectl
	I0610 07:03:40.357149    1338 out.go:169] 
	W0610 07:03:40.361191    1338 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1 Dst:/Users/jenkins/minikube-integration/15074-894/.minikube/cache/darwin/arm64/v1.16.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106703f28 0x106703f28 0x106703f28 0x106703f28 0x106703f28 0x106703f28 0x106703f28] Decompressors:map[bz2:0x140001e78d8 gz:0x140001e7a30 tar:0x140001e78e0 tar.bz2:0x140001e78f0 tar.gz:0x140001e7900 tar.xz:0x140001e7a10 tar.zst:0x140001e7a20 tbz2:0x140001e78f0 tgz:0x140001e7900 txz:0x140001e7a10 tzst:0x140001e7a20 xz:0x140001e7a38 zip:0x140001e7a40 zst:0x140001e7ac0] Getters:map[file:0x14000e8ca70 http:0x14000a2aa50 https:0x14000a2aaa0] Dir:false ProgressListener:
<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0610 07:03:40.361222    1338 out_reason.go:110] 
	W0610 07:03:40.368104    1338 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0610 07:03:40.372166    1338 out.go:169] 
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-414000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
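Two details in the replayed log above are worth noting. The 404 is on the checksum file, not on kubectl itself: minikube requests kubectl.sha1 for v1.16.0, and dl.k8s.io likely has no darwin/arm64 artifacts for a release that old, so the cache step cannot succeed. The exit status 85 from minikube logs accompanies the control plane node "" does not exist message, the normal state of a --download-only profile that never created a VM. A quick way to reproduce the 404 outside the harness (hypothetical command, not part of this run; URL copied from the failure above):

	curl -s -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.16.0/bin/darwin/arm64/kubectl.sha1
	# expected: 404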
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.09s)

TestDownloadOnly/v1.27.2/json-events (15.61s)

=== RUN   TestDownloadOnly/v1.27.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-414000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-414000 --force --alsologtostderr --kubernetes-version=v1.27.2 --container-runtime=docker --driver=qemu2 : (15.614325875s)
--- PASS: TestDownloadOnly/v1.27.2/json-events (15.61s)

TestDownloadOnly/v1.27.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.27.2/preload-exists
--- PASS: TestDownloadOnly/v1.27.2/preload-exists (0.00s)

TestDownloadOnly/v1.27.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.27.2/kubectl
--- PASS: TestDownloadOnly/v1.27.2/kubectl (0.00s)

TestDownloadOnly/v1.27.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.27.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-414000
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-414000: exit status 85 (74.114208ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-414000 | jenkins | v1.30.1 | 10 Jun 23 07:03 PDT |          |
	|         | -p download-only-414000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-414000 | jenkins | v1.30.1 | 10 Jun 23 07:03 PDT |          |
	|         | -p download-only-414000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.27.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/06/10 07:03:40
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.20.4 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0610 07:03:40.556653    1352 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:03:40.556779    1352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:03:40.556782    1352 out.go:309] Setting ErrFile to fd 2...
	I0610 07:03:40.556785    1352 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:03:40.556852    1352 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	W0610 07:03:40.556915    1352 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15074-894/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15074-894/.minikube/config/config.json: no such file or directory
	I0610 07:03:40.557800    1352 out.go:303] Setting JSON to true
	I0610 07:03:40.572915    1352 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":190,"bootTime":1686405630,"procs":384,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:03:40.572975    1352 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:03:40.576959    1352 out.go:97] [download-only-414000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:03:40.580800    1352 out.go:169] MINIKUBE_LOCATION=15074
	I0610 07:03:40.577075    1352 notify.go:220] Checking for updates...
	I0610 07:03:40.587786    1352 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:03:40.590791    1352 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:03:40.593815    1352 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:03:40.596781    1352 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	W0610 07:03:40.602792    1352 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0610 07:03:40.603074    1352 config.go:182] Loaded profile config "download-only-414000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0610 07:03:40.603094    1352 start.go:783] api.Load failed for download-only-414000: filestore "download-only-414000": Docker machine "download-only-414000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0610 07:03:40.603145    1352 driver.go:375] Setting default libvirt URI to qemu:///system
	W0610 07:03:40.603180    1352 start.go:783] api.Load failed for download-only-414000: filestore "download-only-414000": Docker machine "download-only-414000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0610 07:03:40.604674    1352 out.go:97] Using the qemu2 driver based on existing profile
	I0610 07:03:40.604680    1352 start.go:297] selected driver: qemu2
	I0610 07:03:40.604682    1352 start.go:875] validating driver "qemu2" against &{Name:download-only-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-414000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:03:40.606562    1352 cni.go:84] Creating CNI manager for ""
	I0610 07:03:40.606575    1352 cni.go:157] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0610 07:03:40.606581    1352 start_flags.go:319] config:
	{Name:download-only-414000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.27.2 ClusterName:download-only-414000 Namespace:def
ault APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:03:40.606907    1352 iso.go:125] acquiring lock: {Name:mk23533b74200e808ed574dcb4cbcef9b3582f43 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0610 07:03:40.609814    1352 out.go:97] Starting control plane node download-only-414000 in cluster download-only-414000
	I0610 07:03:40.609822    1352 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:03:40.827539    1352 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:03:40.827628    1352 cache.go:57] Caching tarball of preloaded images
	I0610 07:03:40.828383    1352 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:03:40.832563    1352 out.go:97] Downloading Kubernetes v1.27.2 preload ...
	I0610 07:03:40.832608    1352 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:03:41.039413    1352 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.27.2/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4?checksum=md5:4271952d77a401a4cbcfc4225771d46f -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4
	I0610 07:03:51.696685    1352 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:03:51.696821    1352 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15074-894/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.27.2-docker-overlay2-arm64.tar.lz4 ...
	I0610 07:03:52.261573    1352 cache.go:60] Finished verifying existence of preloaded tar for  v1.27.2 on docker
	I0610 07:03:52.261640    1352 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/download-only-414000/config.json ...
	I0610 07:03:52.261921    1352 preload.go:132] Checking if preload exists for k8s version v1.27.2 and runtime docker
	I0610 07:03:52.262084    1352 download.go:107] Downloading: https://dl.k8s.io/release/v1.27.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.27.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/15074-894/.minikube/cache/darwin/arm64/v1.27.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-414000"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.27.2/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.24s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-414000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.35s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-099000 --alsologtostderr --binary-mirror http://127.0.0.1:49418 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-099000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-099000
--- PASS: TestBinaryMirror (0.35s)

TestHyperKitDriverInstallOrUpdate (9.15s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.15s)

TestErrorSpam/setup (29.69s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-404000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-404000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 --driver=qemu2 : (29.68671725s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.25.9, which may have incompatibilities with Kubernetes 1.27.2."
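The whitelisted warning above is ordinary client/cluster version skew: the host's /usr/local/bin/kubectl is 1.25.9 while the cluster runs Kubernetes 1.27.2, outside kubectl's usual one-minor-version skew window, hence the warning. Checking the host binary outside the harness (hypothetical command, not part of this run):

	/usr/local/bin/kubectl version --client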
--- PASS: TestErrorSpam/setup (29.69s)

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 pause
--- PASS: TestErrorSpam/pause (0.64s)

TestErrorSpam/unpause (0.64s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 unpause
--- PASS: TestErrorSpam/unpause (0.64s)

TestErrorSpam/stop (3.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 stop: (3.0671185s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-404000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-404000 stop
--- PASS: TestErrorSpam/stop (3.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /Users/jenkins/minikube-integration/15074-894/.minikube/files/etc/test/nested/copy/1336/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (55.39s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-922000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2229: (dbg) Done: out/minikube-darwin-arm64 start -p functional-922000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (55.3931665s)
--- PASS: TestFunctional/serial/StartWithProxy (55.39s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.85s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-922000 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-darwin-arm64 start -p functional-922000 --alsologtostderr -v=8: (36.850545833s)
functional_test.go:658: soft start took 36.850993625s for "functional-922000" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.85s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-922000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 cache add registry.k8s.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-922000 cache add registry.k8s.io/pause:3.1: (2.150876917s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 cache add registry.k8s.io/pause:3.3
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-922000 cache add registry.k8s.io/pause:3.3: (2.003240375s)
functional_test.go:1044: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 cache add registry.k8s.io/pause:latest
functional_test.go:1044: (dbg) Done: out/minikube-darwin-arm64 -p functional-922000 cache add registry.k8s.io/pause:latest: (1.643303875s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.80s)

TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local63749151/001
functional_test.go:1084: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 cache add minikube-local-cache-test:functional-922000
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 cache delete minikube-local-cache-test:functional-922000
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-922000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1097: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.03s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (67.266834ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-darwin-arm64 -p functional-922000 cache reload: (1.030663125s)
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.25s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.49s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 kubectl -- --context functional-922000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.49s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-922000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.56s)

TestFunctional/serial/ExtraConfig (36.85s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-922000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:752: (dbg) Done: out/minikube-darwin-arm64 start -p functional-922000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (36.849848209s)
functional_test.go:756: restart took 36.849959833s for "functional-922000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (36.85s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-922000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.65s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd973957933/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.65s)

TestFunctional/parallel/ConfigCmd (0.2s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 config get cpus: exit status 14 (28.463125ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 config get cpus: exit status 14 (27.516041ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.20s)

TestFunctional/parallel/DashboardCmd (8.82s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-922000 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-922000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1923: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.82s)

TestFunctional/parallel/DryRun (0.22s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-922000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:969: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-922000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (107.860542ms)

-- stdout --
	* [functional-922000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile

-- /stdout --
** stderr ** 
	I0610 07:09:08.593265    1904 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:09:08.593378    1904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:09:08.593381    1904 out.go:309] Setting ErrFile to fd 2...
	I0610 07:09:08.593383    1904 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:09:08.593459    1904 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:09:08.594545    1904 out.go:303] Setting JSON to false
	I0610 07:09:08.610282    1904 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":518,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:09:08.610366    1904 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:09:08.614239    1904 out.go:177] * [functional-922000] minikube v1.30.1 on Darwin 13.4 (arm64)
	I0610 07:09:08.621130    1904 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:09:08.621222    1904 notify.go:220] Checking for updates...
	I0610 07:09:08.625245    1904 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:09:08.628254    1904 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:09:08.629762    1904 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:09:08.633206    1904 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:09:08.636223    1904 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:09:08.639457    1904 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:09:08.639688    1904 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:09:08.644192    1904 out.go:177] * Using the qemu2 driver based on existing profile
	I0610 07:09:08.651223    1904 start.go:297] selected driver: qemu2
	I0610 07:09:08.651229    1904 start.go:875] validating driver "qemu2" against &{Name:functional-922000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:functional-922000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:09:08.651290    1904 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:09:08.657182    1904 out.go:177] 
	W0610 07:09:08.661224    1904 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0610 07:09:08.665165    1904 out.go:177] 

** /stderr **
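The validation failure is the point of the test: 250MB is below the 1800MB usable minimum quoted in the message, so start exits with code 23 before ever touching the VM. Any request at or above that floor would clear this check, e.g. (hypothetical invocation, not part of this run):

	out/minikube-darwin-arm64 start -p functional-922000 --dry-run --memory 2048 --alsologtostderr --driver=qemu2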
functional_test.go:986: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-922000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.22s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-922000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-922000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.571208ms)

-- stdout --
	* [functional-922000] minikube v1.30.1 sur Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant

-- /stdout --
** stderr ** 
	I0610 07:09:08.479465    1900 out.go:296] Setting OutFile to fd 1 ...
	I0610 07:09:08.479569    1900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:09:08.479572    1900 out.go:309] Setting ErrFile to fd 2...
	I0610 07:09:08.479575    1900 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0610 07:09:08.479657    1900 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
	I0610 07:09:08.481032    1900 out.go:303] Setting JSON to false
	I0610 07:09:08.498545    1900 start.go:127] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":518,"bootTime":1686405630,"procs":404,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.4","kernelVersion":"22.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0610 07:09:08.498639    1900 start.go:135] gopshost.Virtualization returned error: not implemented yet
	I0610 07:09:08.503363    1900 out.go:177] * [functional-922000] minikube v1.30.1 sur Darwin 13.4 (arm64)
	I0610 07:09:08.510224    1900 out.go:177]   - MINIKUBE_LOCATION=15074
	I0610 07:09:08.513283    1900 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	I0610 07:09:08.510280    1900 notify.go:220] Checking for updates...
	I0610 07:09:08.519228    1900 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0610 07:09:08.522261    1900 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0610 07:09:08.525202    1900 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	I0610 07:09:08.528234    1900 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0610 07:09:08.531560    1900 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
	I0610 07:09:08.531780    1900 driver.go:375] Setting default libvirt URI to qemu:///system
	I0610 07:09:08.536175    1900 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0610 07:09:08.543280    1900 start.go:297] selected driver: qemu2
	I0610 07:09:08.543284    1900 start.go:875] validating driver "qemu2" against &{Name:functional-922000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16019/minikube-v1.30.1-1686096373-16019-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.39-1686006988-16632@sha256:412dc5cf58908f3565f59ed5f2b8341f53e998f8d8b54f59253c8f8f335f5a7b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.27.2 ClusterName:functional-922000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.27.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false E
xtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:}
	I0610 07:09:08.543356    1900 start.go:886] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0610 07:09:08.550207    1900 out.go:177] 
	W0610 07:09:08.554273    1900 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0610 07:09:08.557249    1900 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 status
functional_test.go:855: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (24.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2bceeb6a-f269-4715-8ccd-234fa86f7c70] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.020274167s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-922000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-922000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-922000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-922000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [74870614-5a83-4d52-847c-74e3b29533fa] Pending
helpers_test.go:344: "sp-pod" [74870614-5a83-4d52-847c-74e3b29533fa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [74870614-5a83-4d52-847c-74e3b29533fa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.009210708s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-922000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-922000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-922000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [244a746c-0df9-44c4-8aed-de46e291c860] Pending
helpers_test.go:344: "sp-pod" [244a746c-0df9-44c4-8aed-de46e291c860] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [244a746c-0df9-44c4-8aed-de46e291c860] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00570925s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-922000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.03s)
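The sequence above is the standard PVC smoke test: create a claim, mount it in a pod, write a file, delete and recreate the pod, and check that the file survived. A minimal sketch of such a claim, assuming contents similar to (but not copied from) testdata/storage-provisioner/pvc.yaml:

# Hypothetical claim; the 500Mi size is illustrative.
cat <<'EOF' | kubectl --context functional-922000 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
kubectl --context functional-922000 get pvc myclaim -o json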

                                                
                                    
TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh -n functional-922000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 cp functional-922000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1423285433/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh -n functional-922000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.28s)
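minikube cp copies in both directions, host path to node and <node>:<path> back to the host, which is what the two invocations above exercise; a sketch (the host destination path is illustrative):

# Host -> node, node -> host, then verify over ssh.
minikube -p functional-922000 cp testdata/cp-test.txt /home/docker/cp-test.txt
minikube -p functional-922000 cp functional-922000:/home/docker/cp-test.txt ./cp-test.txt
minikube -p functional-922000 ssh -n functional-922000 "sudo cat /home/docker/cp-test.txt"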

                                                
                                    
TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/1336/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "sudo cat /etc/test/nested/copy/1336/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.4s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/1336.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "sudo cat /etc/ssl/certs/1336.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/1336.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "sudo cat /usr/share/ca-certificates/1336.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/13362.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "sudo cat /etc/ssl/certs/13362.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/13362.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "sudo cat /usr/share/ca-certificates/13362.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.40s)
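CertSync verifies that certificates placed under MINIKUBE_HOME are copied into the guest both under the original file name and under the OpenSSL subject-hash name (e.g. 51391683.0). A sketch, assuming the documented ~/.minikube/certs directory and an illustrative my-ca.pem:

# Drop a CA cert where minikube syncs certs from, then restart the profile.
cp my-ca.pem "$HOME/.minikube/certs/"
minikube -p functional-922000 stop && minikube -p functional-922000 start
minikube -p functional-922000 ssh "ls -l /etc/ssl/certs | grep -i my-ca"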

                                                
                                    
TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-922000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "sudo systemctl is-active crio": exit status 1 (69.057625ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.07s)
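The PASS here depends on the command failing: with docker as the active runtime, systemctl is-active crio prints "inactive" and exits with status 3, which minikube ssh surfaces as a non-zero exit. The same check by hand:

# Non-zero exit means the unit is not active (expected for crio under docker).
minikube -p functional-922000 ssh "sudo systemctl is-active crio" \
  || echo "crio inactive, as expected with the docker runtime"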

                                                
                                    
TestFunctional/parallel/License (0.57s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.57s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-922000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-922000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-922000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-922000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1747: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.22s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-922000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-922000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [032bdb31-8972-42ca-8eb3-fb538359dc97] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [032bdb31-8972-42ca-8eb3-fb538359dc97] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.006411125s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.09s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-922000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.190.35 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-922000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
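Taken together, the tunnel subtests walk the LoadBalancer workflow: keep a tunnel running, read the ingress IP assigned to the service, then resolve and reach it through the cluster DNS (10.96.0.10 and nginx-svc come from this run). Condensed, with plain minikube standing in for the harness binary:

# Terminal 1: keep the tunnel alive (may prompt for sudo).
minikube -p functional-922000 tunnel --alsologtostderr
# Terminal 2: read the assigned IP and resolve the service name.
kubectl --context functional-922000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A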

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-922000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-922000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-7b684b55f9-tdk7f" [684cdb84-734b-4dca-87c1-6c5a68e88b2d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-7b684b55f9-tdk7f" [684cdb84-734b-4dca-87c1-6c5a68e88b2d] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.010957458s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.10s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 service list -o json
functional_test.go:1492: Took "295.897291ms" to run "out/minikube-darwin-arm64 -p functional-922000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.30s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.105.4:32518
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.11s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.105.4:32518
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
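The ServiceCmd subtests amount to one flow: create a deployment, expose it as a NodePort service, then let minikube report the reachable endpoint in the formats shown above. Condensed:

# Deploy, expose, and fetch the endpoint (HTTP and HTTPS forms).
kubectl --context functional-922000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-922000 expose deployment hello-node --type=NodePort --port=8080
minikube -p functional-922000 service hello-node --url
minikube -p functional-922000 service --namespace=default --https --url hello-node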

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.19s)

TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1313: Took "116.183917ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1327: Took "32.718125ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.15s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1364: Took "109.524458ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1377: Took "31.427333ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.14s)

TestFunctional/parallel/MountCmd/any-port (7.1s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port287119867/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1686406125887724000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port287119867/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1686406125887724000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port287119867/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1686406125887724000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port287119867/001/test-1686406125887724000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T /mount-9p | grep 9p": exit status 80 (48.433375ms)
-- stdout --
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_mount_65240357e3dedbf116fb2ea940a6d789a704a314_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun 10 14:08 created-by-test
-rw-r--r-- 1 docker docker 24 Jun 10 14:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun 10 14:08 test-1686406125887724000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh cat /mount-9p/test-1686406125887724000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-922000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3312e9f9-36d0-42a8-9de2-5b88bb3da415] Pending
helpers_test.go:344: "busybox-mount" [3312e9f9-36d0-42a8-9de2-5b88bb3da415] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3312e9f9-36d0-42a8-9de2-5b88bb3da415] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3312e9f9-36d0-42a8-9de2-5b88bb3da415] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.007811875s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-922000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port287119867/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.10s)
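MountCmd/any-port exercises the 9p host mount: a host directory appears in the guest at /mount-9p, findmnt confirms the mount, and the busybox-mount pod reads and writes through it. A sketch with an illustrative host path:

# Terminal 1: expose a host directory in the guest over 9p.
minikube mount -p functional-922000 "$HOME/shared:/mount-9p" --alsologtostderr -v=1
# Terminal 2: confirm and inspect the mount.
minikube -p functional-922000 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-922000 ssh -- ls -la /mount-9p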

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (0.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3024479691/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.322958ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3024479691/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "sudo umount -f /mount-9p": exit status 1 (61.094417ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-922000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port3024479691/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.97s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.18s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.18s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image ls --format short --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-922000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-922000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-922000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-922000 image ls --format short --alsologtostderr:
I0610 07:09:23.818921    2063 out.go:296] Setting OutFile to fd 1 ...
I0610 07:09:23.819755    2063 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:09:23.819760    2063 out.go:309] Setting ErrFile to fd 2...
I0610 07:09:23.819762    2063 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:09:23.819836    2063 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
I0610 07:09:23.820222    2063 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 07:09:23.820280    2063 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 07:09:23.821136    2063 ssh_runner.go:195] Run: systemctl --version
I0610 07:09:23.821146    2063 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/id_rsa Username:docker}
I0610 07:09:23.850905    2063 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.09s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image ls --format table --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-922000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.27.2           | 305d7ed1dae28 | 56.2MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/google-containers/addon-resizer      | functional-922000 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | alpine            | 5ee47dcca7543 | 41MB   |
| registry.k8s.io/kube-controller-manager     | v1.27.2           | 2ee705380c3c5 | 107MB  |
| registry.k8s.io/kube-proxy                  | v1.27.2           | 29921a0845422 | 66.5MB |
| registry.k8s.io/etcd                        | 3.5.7-0           | 24bc64e911039 | 181MB  |
| docker.io/library/minikube-local-cache-test | functional-922000 | 053162da2fe9b | 30B    |
| registry.k8s.io/coredns/coredns             | v1.10.1           | 97e04611ad434 | 51.4MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/nginx                     | latest            | c42efe0b54387 | 135MB  |
| registry.k8s.io/kube-apiserver              | v1.27.2           | 72c9df6be7f1b | 115MB  |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-922000 image ls --format table --alsologtostderr:
I0610 07:09:23.941342    2071 out.go:296] Setting OutFile to fd 1 ...
I0610 07:09:23.941576    2071 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:09:23.941578    2071 out.go:309] Setting ErrFile to fd 2...
I0610 07:09:23.941581    2071 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:09:23.941658    2071 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
I0610 07:09:23.942086    2071 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 07:09:23.942149    2071 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 07:09:23.943156    2071 ssh_runner.go:195] Run: systemctl --version
I0610 07:09:23.943171    2071 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/id_rsa Username:docker}
I0610 07:09:23.973254    2071 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image ls --format yaml --alsologtostderr
functional_test.go:264: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-922000 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c42efe0b54387756e68d167a437aef21451f63eebd9330bb555367d67128386c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "135000000"
- id: 29921a084542255eb81a1a660a603b1a24636d88b202f9010daae75fa32754c0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.27.2
size: "66500000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-922000
size: "32900000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 305d7ed1dae2877c3a80d434c5fb9f1aac1aa3d2431c36130a3fcd1970e93840
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.27.2
size: "56200000"
- id: 97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "51400000"
- id: 24bc64e911039ecf00e263be2161797c758b7d82403ca5516ab64047a477f737
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.7-0
size: "181000000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 053162da2fe9bbe265a311caac911062330f37de5b5c12792fc18e035d3e3cbc
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-922000
size: "30"
- id: 5ee47dcca7543750b3941b52e98f103bbbae9aaf574ab4dc018e1e7d34e505ad
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "41000000"
- id: 72c9df6be7f1b997e4a31b5cb9aa7262e5278905af97e6a69e341e3f0f9bbaae
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.27.2
size: "115000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 2ee705380c3c59a538b853cbe9ae9886ebbd0001a4cea4add5adeea48e5f48d4
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.27.2
size: "107000000"
functional_test.go:267: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-922000 image ls --format yaml --alsologtostderr:
I0610 07:09:23.818869    2064 out.go:296] Setting OutFile to fd 1 ...
I0610 07:09:23.819769    2064 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:09:23.819772    2064 out.go:309] Setting ErrFile to fd 2...
I0610 07:09:23.819774    2064 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:09:23.819846    2064 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
I0610 07:09:23.820217    2064 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 07:09:23.820278    2064 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 07:09:23.821586    2064 ssh_runner.go:195] Run: systemctl --version
I0610 07:09:23.821599    2064 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/id_rsa Username:docker}
I0610 07:09:23.850967    2064 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh pgrep buildkitd: exit status 1 (70.818959ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image build -t localhost/my-image:functional-922000 testdata/build --alsologtostderr
functional_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p functional-922000 image build -t localhost/my-image:functional-922000 testdata/build --alsologtostderr: (2.5287725s)
functional_test.go:318: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-922000 image build -t localhost/my-image:functional-922000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 10d9cc947b0e
Removing intermediate container 10d9cc947b0e
---> e2b79a5692fa
Step 3/3 : ADD content.txt /
---> 2c7362ebbd2d
Successfully built 2c7362ebbd2d
Successfully tagged localhost/my-image:functional-922000
functional_test.go:321: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-922000 image build -t localhost/my-image:functional-922000 testdata/build --alsologtostderr:
I0610 07:09:23.967997    2073 out.go:296] Setting OutFile to fd 1 ...
I0610 07:09:23.968194    2073 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:09:23.968197    2073 out.go:309] Setting ErrFile to fd 2...
I0610 07:09:23.968200    2073 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0610 07:09:23.968282    2073 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15074-894/.minikube/bin
I0610 07:09:23.968674    2073 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 07:09:23.969383    2073 config.go:182] Loaded profile config "functional-922000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.27.2
I0610 07:09:23.970171    2073 ssh_runner.go:195] Run: systemctl --version
I0610 07:09:23.970181    2073 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/id_rsa Username:docker}
I0610 07:09:24.000540    2073 build_images.go:151] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4190805356.tar
I0610 07:09:24.000621    2073 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0610 07:09:24.003486    2073 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4190805356.tar
I0610 07:09:24.005052    2073 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4190805356.tar: stat -c "%s %y" /var/lib/minikube/build/build.4190805356.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4190805356.tar': No such file or directory
I0610 07:09:24.005071    2073 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4190805356.tar --> /var/lib/minikube/build/build.4190805356.tar (3072 bytes)
I0610 07:09:24.012757    2073 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4190805356
I0610 07:09:24.015496    2073 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4190805356 -xf /var/lib/minikube/build/build.4190805356.tar
I0610 07:09:24.018567    2073 docker.go:336] Building image: /var/lib/minikube/build/build.4190805356
I0610 07:09:24.018609    2073 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-922000 /var/lib/minikube/build/build.4190805356
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0610 07:09:26.457472    2073 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-922000 /var/lib/minikube/build/build.4190805356: (2.438933917s)
I0610 07:09:26.457540    2073 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4190805356
I0610 07:09:26.460325    2073 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4190805356.tar
I0610 07:09:26.462955    2073 build_images.go:207] Built localhost/my-image:functional-922000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.4190805356.tar
I0610 07:09:26.462970    2073 build_images.go:123] succeeded building to: functional-922000
I0610 07:09:26.462972    2073 build_images.go:124] failed building to: 
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.67s)
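The Step 1/3 through 3/3 lines above imply a three-line Dockerfile in testdata/build; a plausible reconstruction, inferred from the build log rather than copied from the repo:

# Recreate a similar build context and build it inside the cluster runtime.
mkdir -p build && cd build
printf 'hello\n' > content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
minikube -p functional-922000 image build -t localhost/my-image:functional-922000 .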

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:340: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.731336041s)
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-922000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.78s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image load --daemon gcr.io/google-containers/addon-resizer:functional-922000 --alsologtostderr
functional_test.go:353: (dbg) Done: out/minikube-darwin-arm64 -p functional-922000 image load --daemon gcr.io/google-containers/addon-resizer:functional-922000 --alsologtostderr: (2.172721542s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.24s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image load --daemon gcr.io/google-containers/addon-resizer:functional-922000 --alsologtostderr
functional_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p functional-922000 image load --daemon gcr.io/google-containers/addon-resizer:functional-922000 --alsologtostderr: (1.506854291s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.59s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
2023/06/10 07:09:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:233: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.491214541s)
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-922000
functional_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image load --daemon gcr.io/google-containers/addon-resizer:functional-922000 --alsologtostderr
functional_test.go:243: (dbg) Done: out/minikube-darwin-arm64 -p functional-922000 image load --daemon gcr.io/google-containers/addon-resizer:functional-922000 --alsologtostderr: (1.887146292s)
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.49s)

TestFunctional/parallel/DockerEnv/bash (0.35s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:494: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-922000 docker-env) && out/minikube-darwin-arm64 status -p functional-922000"
functional_test.go:517: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-922000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.35s)
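docker-env prints the environment variables that point a local docker client at the daemon inside the cluster, so after eval-ing it a plain docker images lists in-cluster images; that is exactly what the two bash invocations above assert. Sketch:

# Target minikube's Docker daemon for the current shell, then undo it.
eval "$(minikube -p functional-922000 docker-env)"
docker images
eval "$(minikube -p functional-922000 docker-env --unset)"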

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
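
All three UpdateContextCmd subtests drive the same call; update-context rewrites the profile's kubeconfig entry so it points at the cluster's current endpoint. A sketch of the manual equivalent (the kubectl check is an assumption, not part of the harness):

    # refresh the kubeconfig entry for the profile, then confirm the active context
    out/minikube-darwin-arm64 -p functional-922000 update-context
    kubectl config current-context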

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image save gcr.io/google-containers/addon-resizer:functional-922000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image rm gcr.io/google-containers/addon-resizer:functional-922000 --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:446: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.58s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-922000
functional_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 image save --daemon gcr.io/google-containers/addon-resizer:functional-922000 --alsologtostderr
functional_test.go:422: (dbg) Done: out/minikube-darwin-arm64 -p functional-922000 image save --daemon gcr.io/google-containers/addon-resizer:functional-922000 --alsologtostderr: (1.497305792s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-922000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.58s)
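
Taken together, the four ImageCommands subtests above form a save/remove/load/save-daemon round trip. A condensed sketch of the same flow, reusing the tag and tarball path from this run:

    # save the image from the cluster to a tarball, drop it, reload it, then
    # export it back to the host Docker daemon and verify it arrived
    out/minikube-darwin-arm64 -p functional-922000 image save gcr.io/google-containers/addon-resizer:functional-922000 /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-arm64 -p functional-922000 image rm gcr.io/google-containers/addon-resizer:functional-922000
    out/minikube-darwin-arm64 -p functional-922000 image load /Users/jenkins/workspace/addon-resizer-save.tar
    out/minikube-darwin-arm64 -p functional-922000 image save --daemon gcr.io/google-containers/addon-resizer:functional-922000
    docker image inspect gcr.io/google-containers/addon-resizer:functional-922000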

TestFunctional/delete_addon-resizer_images (0.12s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-922000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-922000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-922000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestImageBuild/serial/Setup (29.38s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-734000 --driver=qemu2 
image_test.go:69: (dbg) Done: out/minikube-darwin-arm64 start -p image-734000 --driver=qemu2 : (29.383175083s)
--- PASS: TestImageBuild/serial/Setup (29.38s)

TestImageBuild/serial/NormalBuild (1.99s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-734000
image_test.go:78: (dbg) Done: out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-734000: (1.993400917s)
--- PASS: TestImageBuild/serial/NormalBuild (1.99s)

TestImageBuild/serial/BuildWithDockerIgnore (0.11s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-734000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.11s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.1s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-734000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.10s)

TestIngressAddonLegacy/StartLegacyK8sCluster (84.41s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-arm64 start -p ingress-addon-legacy-433000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-darwin-arm64 start -p ingress-addon-legacy-433000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=qemu2 : (1m24.409212209s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (84.41s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (20.82s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-433000 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-darwin-arm64 -p ingress-addon-legacy-433000 addons enable ingress --alsologtostderr -v=5: (20.818201292s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (20.82s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.21s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-arm64 -p ingress-addon-legacy-433000 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.21s)

TestJSONOutput/start/Command (43.75s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-537000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0610 07:13:14.349529    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:13:14.356409    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:13:14.368436    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:13:14.390463    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:13:14.430874    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:13:14.512948    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:13:14.675051    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:13:14.997139    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:13:15.639363    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:13:16.920102    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:13:19.482178    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
E0610 07:13:24.604158    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 start -p json-output-537000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : (43.746190125s)
--- PASS: TestJSONOutput/start/Command (43.75s)
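
The interleaved cert_rotation errors most likely come from client-go still watching the client certificate of the functional-922000 profile, which no longer exists on disk at this point in the run; they are noise relative to this test, which passes. A hedged way to confirm the stale reference (these check commands are an assumption, not part of the harness):

    # the watched certificate should be gone, and no kubeconfig context
    # should still name the deleted profile
    ls /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt
    kubectl config get-contexts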

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.29s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-537000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.29s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.23s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-537000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.23s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (9.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-537000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-537000 --output=json --user=testUser: (9.079577459s)
--- PASS: TestJSONOutput/stop/Command (9.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.35s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-075000 --memory=2200 --output=json --wait=true --driver=fail
E0610 07:13:34.846079    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-075000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (92.01975ms)

-- stdout --
	{"specversion":"1.0","id":"a2e9d050-f35a-4f36-9700-e6fb6072eb66","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-075000] minikube v1.30.1 on Darwin 13.4 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e35b4c7b-07c8-4780-acc6-4318fc0d076d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15074"}}
	{"specversion":"1.0","id":"c1ffa65f-cffe-4bd8-9061-181ebf1551bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig"}}
	{"specversion":"1.0","id":"5508bebf-f3c3-45d2-9cdc-0e12b8eda1a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"f8cec044-2d74-4ed3-ab26-954e2bca9ff4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3b34c19f-0518-4588-b1ce-e94d39762d19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube"}}
	{"specversion":"1.0","id":"bf6c7753-486b-47fa-859f-908aeb64930a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"d4dd5794-809e-4474-80e6-0e9b8ab58afb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-075000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-075000
--- PASS: TestErrorJSONOutput (0.35s)
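
Each line in the stdout block above is a CloudEvents-style JSON record; error events are distinguished by "type":"io.k8s.sigs.minikube.error" and carry exitcode, name, and advice fields in data. A sketch for extracting just the error events from such a stream (jq usage is an assumption, not part of the harness):

    out/minikube-darwin-arm64 start -p json-output-error-075000 --output=json --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'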

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestMinikubeProfile (61.5s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-037000 --driver=qemu2 
E0610 07:13:55.327606    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p first-037000 --driver=qemu2 : (28.898649084s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p second-038000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-arm64 start -p second-038000 --driver=qemu2 : (31.845291834s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile first-037000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 profile second-038000
E0610 07:14:36.288272    1336 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15074-894/.minikube/profiles/functional-922000/client.crt: no such file or directory
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-038000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-038000
helpers_test.go:175: Cleaning up "first-037000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-037000
--- PASS: TestMinikubeProfile (61.50s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-817000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-817000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (93.629792ms)

-- stdout --
	* [NoKubernetes-817000] minikube v1.30.1 on Darwin 13.4 (arm64)
	  - MINIKUBE_LOCATION=15074
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15074-894/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15074-894/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
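
The MK_USAGE failure above is the expected result: --no-kubernetes and --kubernetes-version are mutually exclusive. Per the error text itself, the valid forms are:

    # start the profile without Kubernetes at all...
    out/minikube-darwin-arm64 start -p NoKubernetes-817000 --no-kubernetes --driver=qemu2
    # ...or clear a globally configured version first, as the message suggests
    minikube config unset kubernetes-version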

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-817000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-817000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (42.358083ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-817000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (0.15s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.15s)

TestNoKubernetes/serial/Stop (0.06s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-817000
--- PASS: TestNoKubernetes/serial/Stop (0.06s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-817000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-817000 "sudo systemctl is-active --quiet service kubelet": exit status 89 (41.26175ms)

-- stdout --
	* The control plane node must be running for this command
	  To start a cluster, run: "minikube start -p NoKubernetes-817000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-485000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (0.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000 -n old-k8s-version-485000: exit status 7 (27.5895ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-485000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.09s)
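
The "(may be ok)" note reflects the status command's exit-code convention: after a stop, status exits non-zero (7 in this run, with Host reporting Stopped), and the test treats that code as an expected state rather than a failure. A sketch of the same check:

    # exit status 7 here corresponds to the stopped host seen above
    out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-485000
    [ $? -eq 7 ] && echo "host stopped, as expected after 'minikube stop'"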

TestStartStop/group/no-preload/serial/Stop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-457000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/no-preload/serial/Stop (0.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-457000 -n no-preload-457000: exit status 7 (28.675958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-457000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-530000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/embed-certs/serial/Stop (0.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-530000 -n embed-certs-530000: exit status 7 (28.034709ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-530000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-693000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-693000 -n default-k8s-diff-port-693000: exit status 7 (27.016916ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-693000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-675000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-675000 --alsologtostderr -v=3
--- PASS: TestStartStop/group/newest-cni/serial/Stop (0.06s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-675000 -n newest-cni-675000: exit status 7 (29.545375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-675000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (20/242)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.27.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.27.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.27.2/cached-images (0.00s)

TestDownloadOnly/v1.27.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.27.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.27.2/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1782: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (15.24s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup761318276/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup761318276/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup761318276/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1: exit status 80 (81.299625ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get machine status: state: connect: dial unix /Users/jenkins/minikube-integration/15074-894/.minikube/machines/functional-922000/monitor: connect: connection refused
	* 
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_mount_92709bf4d8eeceefad7eb6bf1d1cc00e38564887_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1: exit status 1 (104.701584ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1: exit status 1 (68.518333ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1: exit status 1 (104.618125ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1: exit status 1 (104.248833ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1: exit status 1 (101.950959ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1: exit status 1 (104.418375ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1: exit status 1 (60.736625ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup761318276/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup761318276/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-922000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup761318276/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (15.24s)
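
The skip message above captures the failure mode: on macOS, a non-code-signed binary listening on a non-localhost port triggers an interactive approval prompt, so the mounts never appear on an unattended runner. A hedged manual reproduction of what the test polls for (the /tmp/src source directory is hypothetical):

    # start a mount in the background, then check whether it ever shows up in the guest
    out/minikube-darwin-arm64 mount -p functional-922000 /tmp/src:/mount1 &
    out/minikube-darwin-arm64 -p functional-922000 ssh "findmnt -T" /mount1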

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:296: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.38s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-176000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-176000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-176000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-176000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-176000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-176000"

                                                
                                                
----------------------- debugLogs end: cilium-176000 [took: 2.149172125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-176000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-176000
--- SKIP: TestNetworkPlugins/group/cilium (2.38s)
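
Note: every probe in the debugLogs dump above fails the same way because the "cilium-176000" profile was never started; kubectl therefore has no matching context (see the empty kubeconfig under ">>> k8s: kubectl config:") and minikube has no profile to shell into. A collector could detect this up front and short-circuit. A minimal sketch, assuming a hypothetical contextExists helper (not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // contextExists reports whether kubectl knows the named context.
    // `kubectl config get-contexts <name>` exits non-zero when the
    // context is absent, which is exactly the failure logged above.
    func contextExists(name string) bool {
        return exec.Command("kubectl", "config", "get-contexts", name).Run() == nil
    }

    func main() {
        profile := "cilium-176000"
        if !contextExists(profile) {
            fmt.Printf("context %q not found; skipping kubectl diagnostics\n", profile)
            return
        }
        // kubectl/minikube probes would run here
    }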

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-822000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-822000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)